34 Data: Working with strings
=============================
*Purpose*: Strings show up in data science all the time. Even when all our variables are numeric, our *column names* are generally strings. To strengthen our ability to work with strings, we’ll learn how to use *regular expressions* and apply them to wrangling and tidying data.
*Reading*: [RegexOne](https://regexone.com/); All lessons in the Interactive Tutorial, Additional Practice Problems are optional
*Topics*: Regular expressions, `stringr` package functions, pivoting
*Note*: The [stringr cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/strings.pdf) is a helpful reference for this exercise!
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
34\.1 Intro to Stringr
----------------------
Within the Tidyverse, the package `stringr` contains a large number of functions for working with strings. We're going to learn several of the most useful ones, and how to drive them with regular expressions.
### 34\.1\.1 Detect
The function `str_detect()` allows us to *detect* the presence of a particular pattern. For instance, we can give it a fixed pattern such as:
```
## NOTE: No need to edit
strings <- c(
"Team Alpha",
"Team Beta",
"Group 1",
"Group 2"
)
str_detect(
string = strings,
pattern = "Team"
)
```
```
## [1] TRUE TRUE FALSE FALSE
```
`str_detect()` checks whether the given `pattern` is within the given `string`. This function returns a *boolean*—a `TRUE` or `FALSE` value—and furthermore it is *vectorized*—it returns a boolean vector of `T/F` values corresponding to each original entry.
Since `str_detect()` returns boolean values, we can use it as a helper in
`filter()` calls. For instance, in the `mpg` dataset there are automobiles with
`trans` that are automatic or manual.
```
## NOTE: No need to change this!
mpg %>%
select(trans) %>%
glimpse()
```
```
## Rows: 234
## Columns: 1
## $ trans <chr> "auto(l5)", "manual(m5)", "manual(m6)", "auto(av)", "auto(l5)", …
```
We can’t simply check whether `trans == "auto"`, because no string will *exactly* match that fixed pattern. But we can instead check for a substring.
```
## NOTE: No need to change this!
mpg %>%
filter(str_detect(trans, "auto"))
```
```
## # A tibble: 157 × 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 audi a4 1.8 1999 4 auto… f 18 29 p comp…
## 2 audi a4 2 2008 4 auto… f 21 30 p comp…
## 3 audi a4 2.8 1999 6 auto… f 16 26 p comp…
## 4 audi a4 3.1 2008 6 auto… f 18 27 p comp…
## 5 audi a4 quattro 1.8 1999 4 auto… 4 16 25 p comp…
## 6 audi a4 quattro 2 2008 4 auto… 4 19 27 p comp…
## 7 audi a4 quattro 2.8 1999 6 auto… 4 15 25 p comp…
## 8 audi a4 quattro 3.1 2008 6 auto… 4 17 25 p comp…
## 9 audi a6 quattro 2.8 1999 6 auto… 4 15 24 p mids…
## 10 audi a6 quattro 3.1 2008 6 auto… 4 17 25 p mids…
## # … with 147 more rows
```
### 34\.1\.2 **q1** Filter the `mpg` dataset down to `manual` vehicles using `str_detect()`.
```
df_q1 <-
mpg %>%
filter(str_detect(trans, "manual"))
df_q1 %>% glimpse()
```
```
## Rows: 77
## Columns: 11
## $ manufacturer <chr> "audi", "audi", "audi", "audi", "audi", "audi", "audi", "…
## $ model <chr> "a4", "a4", "a4", "a4 quattro", "a4 quattro", "a4 quattro…
## $ displ <dbl> 1.8, 2.0, 2.8, 1.8, 2.0, 2.8, 3.1, 5.7, 6.2, 7.0, 3.7, 3.…
## $ year <int> 1999, 2008, 1999, 1999, 2008, 1999, 2008, 1999, 2008, 200…
## $ cyl <int> 4, 4, 6, 4, 4, 6, 6, 8, 8, 8, 6, 6, 8, 8, 8, 8, 8, 6, 6, …
## $ trans <chr> "manual(m5)", "manual(m6)", "manual(m5)", "manual(m5)", "…
## $ drv <chr> "f", "f", "f", "4", "4", "4", "4", "r", "r", "r", "4", "4…
## $ cty <int> 21, 20, 18, 18, 20, 17, 15, 16, 16, 15, 15, 14, 11, 12, 1…
## $ hwy <int> 29, 31, 26, 26, 28, 25, 25, 26, 26, 24, 19, 17, 17, 16, 1…
## $ fl <chr> "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "r", "r…
## $ class <chr> "compact", "compact", "compact", "compact", "compact", "c…
```
Use the following test to check your work.
```
## NOTE: No need to change this!
assertthat::assert_that(
all(
df_q1 %>%
pull(trans) %>%
str_detect(., "manual")
)
)
```
```
## [1] TRUE
```
```
print("Great job!")
```
```
## [1] "Great job!"
```
Part of the power of learning *regular expressions* is that we can write *patterns*, rather than exact matches. Notice that the `drv` variable in `mpg` takes either character or digit values. What if we wanted to filter out all the cases that had digits?
```
mpg %>%
filter(
!str_detect(drv, "\\d")
) %>%
glimpse()
```
```
## Rows: 131
## Columns: 11
## $ manufacturer <chr> "audi", "audi", "audi", "audi", "audi", "audi", "audi", "…
## $ model <chr> "a4", "a4", "a4", "a4", "a4", "a4", "a4", "c1500 suburban…
## $ displ <dbl> 1.8, 1.8, 2.0, 2.0, 2.8, 2.8, 3.1, 5.3, 5.3, 5.3, 5.7, 6.…
## $ year <int> 1999, 1999, 2008, 2008, 1999, 1999, 2008, 2008, 2008, 200…
## $ cyl <int> 4, 4, 4, 4, 6, 6, 6, 8, 8, 8, 8, 8, 8, 8, 8, 8, 8, 4, 4, …
## $ trans <chr> "auto(l5)", "manual(m5)", "manual(m6)", "auto(av)", "auto…
## $ drv <chr> "f", "f", "f", "f", "f", "f", "f", "r", "r", "r", "r", "r…
## $ cty <int> 18, 21, 20, 21, 16, 18, 18, 14, 11, 14, 13, 12, 16, 15, 1…
## $ hwy <int> 29, 29, 31, 30, 26, 26, 27, 20, 15, 20, 17, 17, 26, 23, 2…
## $ fl <chr> "p", "p", "p", "p", "p", "p", "p", "r", "e", "r", "r", "r…
## $ class <chr> "compact", "compact", "compact", "compact", "compact", "c…
```
Recall (from the reading) that `\d` is a regular expression referring to a single digit. However, a tricky thing about R is that we have to *double* the backslash, writing `\\d`, in order to get the correct behavior \[1].
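As a quick check (this snippet is an added illustration, not part of the original exercise), printing the string shows that the doubled backslash in R source code stands for a single backslash, which is what the regex engine actually sees:
```
## Added illustration: the R string "\\d" contains just two characters,
## a backslash and a "d" -- i.e., the regex \d
writeLines("\\d")
## should print: \d
str_detect("Group 1", "\\d") # should return TRUE, since "Group 1" contains a digit
```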
### 34\.1\.3 **q2** Use `str_detect()` and an appropriate regular expression to filter `mpg` for *only* those values of `trans` that have a digit.
```
df_q2 <-
mpg %>%
filter(str_detect(trans, "\\d"))
df_q2 %>% glimpse()
```
```
## Rows: 229
## Columns: 11
## $ manufacturer <chr> "audi", "audi", "audi", "audi", "audi", "audi", "audi", "…
## $ model <chr> "a4", "a4", "a4", "a4", "a4", "a4 quattro", "a4 quattro",…
## $ displ <dbl> 1.8, 1.8, 2.0, 2.8, 2.8, 1.8, 1.8, 2.0, 2.0, 2.8, 2.8, 3.…
## $ year <int> 1999, 1999, 2008, 1999, 1999, 1999, 1999, 2008, 2008, 199…
## $ cyl <int> 4, 4, 4, 6, 6, 4, 4, 4, 4, 6, 6, 6, 6, 6, 6, 8, 8, 8, 8, …
## $ trans <chr> "auto(l5)", "manual(m5)", "manual(m6)", "auto(l5)", "manu…
## $ drv <chr> "f", "f", "f", "f", "f", "4", "4", "4", "4", "4", "4", "4…
## $ cty <int> 18, 21, 20, 16, 18, 18, 16, 20, 19, 15, 17, 17, 15, 15, 1…
## $ hwy <int> 29, 29, 31, 26, 26, 26, 25, 28, 27, 25, 25, 25, 25, 24, 2…
## $ fl <chr> "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "p", "p…
## $ class <chr> "compact", "compact", "compact", "compact", "compact", "c…
```
Use the following test to check your work.
```
## NOTE: No need to change this!
assertthat::assert_that(
all(
df_q2 %>%
pull(trans) %>%
str_detect(., "\\d")
)
)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
### 34\.1\.4 Extract
While `str_detect()` is useful for filtering, `str_extract()` is useful with `mutate()`. This function returns the *first matching substring*, as demonstrated below.
```
## NOTE: No need to change this!
str_extract(
string = c("abc", "xyz", "123"),
pattern = "\\d{3}"
)
```
```
## [1] NA NA "123"
```
Note that if `str_extract()` doesn't find a match, it will return `NA`. Also note that here I'm using a *quantifier*; as we saw in the reading, the `{}` notation allows us to specify the number of repetitions to seek.
```
## NOTE: No need to change this!
str_extract(
string = c("abc", "xyz", "123"),
pattern = "\\d{2}"
)
```
```
## [1] NA NA "12"
```
Notice that this only returns the first two digits of the match and neglects the third. If we don't know the specific number of repetitions we're looking for, we can use `+` to match one or more characters:
```
## NOTE: No need to change this!
str_extract(
string = c("abc", "xyz", "123"),
pattern = "\\d+"
)
```
```
## [1] NA NA "123"
```
We can also use the `[[:alpha:]]` special symbol to select alphabetic characters only:
```
## NOTE: No need to change this!
str_extract(
string = c("abc", "xyz", "123"),
pattern = "[[:alpha:]]+"
)
```
```
## [1] "abc" "xyz" NA
```
And finally the wildcard `.` allows us to match any character:
```
## NOTE: No need to change this!
str_extract(
string = c("abc", "xyz", "123"),
pattern = ".+"
)
```
```
## [1] "abc" "xyz" "123"
```
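Since `.` is a special character, matching a *literal* period requires escaping it, just as we'll escape parentheses later in this exercise. A small added sketch (not part of the original exercise):
```
## Added illustration: "\\." matches a literal period, while "." matches any character
str_extract(c("v1.2", "v34"), "\\d\\.\\d")
## should return: "1.2" NA
```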
### 34\.1\.5 **q3** Match alphabet characters
Notice that the `trans` column of `mpg` has many entries of the form `auto|manual\\([[:alpha:]]\\d\\)`; use `mutate()` with `str_extract()` to create a new column `tmp` containing just the code inside the parentheses, by extracting the pattern `[[:alpha:]]\\d`.
```
## TASK: Mutate `trans` to extract
df_q3 <-
mpg %>%
mutate(tmp = str_extract(trans, "[[:alpha:]]\\d"))
df_q3 %>%
select(tmp)
```
```
## # A tibble: 234 × 1
## tmp
## <chr>
## 1 l5
## 2 m5
## 3 m6
## 4 <NA>
## 5 l5
## 6 m5
## 7 <NA>
## 8 m5
## 9 l5
## 10 m6
## # … with 224 more rows
```
Use the following test to check your work.
```
## NOTE: No need to change this!
assertthat::assert_that(
(df_q3 %>% filter(is.na(tmp)) %>% dim(.) %>% .[[1]]) == 5
)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
### 34\.1\.6 Match and Capture Groups
The `str_match()` function is similar to `str_extract()`, but it allows us to specify multiple “pieces” of a string to match with *capture groups*. A capture group is a pattern within parentheses; for instance, imagine we were trying to parse phone numbers, all with different formatting. We could use three capture groups for the three pieces of the phone number:
```
## NOTE: No need to edit; execute
phone_numbers <- c(
"(814) 555 1234",
"650-555-1234",
"8005551234"
)
str_match(
phone_numbers,
"(\\d{3}).*(\\d{3}).*(\\d{4})"
)
```
```
## [,1] [,2] [,3] [,4]
## [1,] "814) 555 1234" "814" "555" "1234"
## [2,] "650-555-1234" "650" "555" "1234"
## [3,] "8005551234" "800" "555" "1234"
```
Remember that the `.` character is a wildcard. Here I use the `*` quantifier for *zero or more* instances; this takes care of cases where there is no separator between the digit groups, as well as cases with spaces or dashes between them.
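Because `str_match()` returns a matrix whose first column is the full match and whose remaining columns are the capture groups, you can pull out a single piece by indexing a column. A small added sketch (not part of the original exercise):
```
## Added illustration: column 2 holds the first capture group (the area code)
str_match(
  phone_numbers,
  "(\\d{3}).*(\\d{3}).*(\\d{4})"
)[, 2]
## based on the matrix shown above, this should return "814" "650" "800"
```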
### 34\.1\.7 **q4** Modify the pattern below to extract the x, y pairs separately.
```
## NOTE: No need to edit this setup
points <- c(
"x=1, y=2",
"x=3, y=2",
"x=10, y=4"
)
q4 <-
str_match(
points,
pattern = "x=(\\d+), y=(\\d+)"
)
q4
```
```
## [,1] [,2] [,3]
## [1,] "x=1, y=2" "1" "2"
## [2,] "x=3, y=2" "3" "2"
## [3,] "x=10, y=4" "10" "4"
```
Use the following test to check your work.
```
## NOTE: No need to change this!
assertthat::assert_that(
all(
q4[, -1] ==
t(matrix(as.character(c(1, 2, 3, 2, 10, 4)), nrow = 2))
)
)
```
```
## [1] TRUE
```
```
print("Excellent!")
```
```
## [1] "Excellent!"
```
34\.2 Removal
-------------
One last `stringr` function that’s helpful to know: `str_remove()` will simply remove the *first* matched pattern in a string. This is particularly helpful for dealing with prefixes and suffixes.
```
## NOTE: No need to edit; execute
string_quantiles <- c(
"q0.01",
"q0.5",
"q0.999"
)
string_quantiles %>%
str_remove(., "q") %>%
as.numeric()
```
```
## [1] 0.010 0.500 0.999
```
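Note that `str_remove()` only removes the *first* match; its sibling `str_remove_all()` removes every match. A small added sketch to illustrate the difference (not part of the original exercise):
```
## Added illustration: first match vs. all matches
str_remove("a1b2c3", "\\d")     # should return "ab2c3"
str_remove_all("a1b2c3", "\\d") # should return "abc"
```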
### 34\.2\.1 **q5** Use `str_remove()` within `mutate()` to modify `trans`, removing the parentheses and all characters between them.
*Hint*: Note that parentheses are *special characters*, so you’ll need to *escape* them as you did above.
```
df_q5 <-
mpg %>%
mutate(trans = str_remove(trans, "\\(.*\\)"))
df_q5
```
```
## # A tibble: 234 × 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 audi a4 1.8 1999 4 auto f 18 29 p comp…
## 2 audi a4 1.8 1999 4 manu… f 21 29 p comp…
## 3 audi a4 2 2008 4 manu… f 20 31 p comp…
## 4 audi a4 2 2008 4 auto f 21 30 p comp…
## 5 audi a4 2.8 1999 6 auto f 16 26 p comp…
## 6 audi a4 2.8 1999 6 manu… f 18 26 p comp…
## 7 audi a4 3.1 2008 6 auto f 18 27 p comp…
## 8 audi a4 quattro 1.8 1999 4 manu… 4 18 26 p comp…
## 9 audi a4 quattro 1.8 1999 4 auto 4 16 25 p comp…
## 10 audi a4 quattro 2 2008 4 manu… 4 20 28 p comp…
## # … with 224 more rows
```
Use the following test to check your work.
```
## NOTE: No need to change this!
assertthat::assert_that(
all(
df_q5 %>%
pull(trans) %>%
str_detect(., "\\(.*\\)") %>%
!.
)
)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
34\.3 Regex in Other Functions
------------------------------
Now we’re going to put all these ideas together—special characters, quantifiers, and capture groups—in order to solve a data tidying issue.
Other functions like `pivot_longer` and `pivot_wider` also take regex patterns. We can use these to help solve data tidying problems. Let's return to the alloy data from `e-data03-pivot-basics`; the version of the data below does not have the convenient `_` separators in the column names.
```
## NOTE: No need to edit; execute
alloys <- tribble(
~thick, ~E00, ~mu00, ~E45, ~mu45, ~rep,
0.022, 10600, 0.321, 10700, 0.329, 1,
0.022, 10600, 0.323, 10500, 0.331, 2,
0.032, 10400, 0.329, 10400, 0.318, 1,
0.032, 10300, 0.319, 10500, 0.326, 2
)
alloys
```
```
## # A tibble: 4 × 6
## thick E00 mu00 E45 mu45 rep
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 0.022 10600 0.321 10700 0.329 1
## 2 0.022 10600 0.323 10500 0.331 2
## 3 0.032 10400 0.329 10400 0.318 1
## 4 0.032 10300 0.319 10500 0.326 2
```
As described in the RegexOne tutorial, you can use *capture groups* in parentheses `(...)` to define different groups in your regex pattern. These can be used along with the `pivot_` functions, for instance when you want to break apart column names into multiple groups.
### 34\.3\.1 **q6** Use your knowledge of regular expressions along with the `names_pattern` argument to successfully tidy the `alloys` data.
```
## TASK: Tidy `alloys`
df_q6 <-
alloys %>%
pivot_longer(
names_to = c("property", "angle"),
names_pattern = "([[:alpha:]]+)(\\d+)",
values_to = "value",
cols = matches("\\d")
) %>%
mutate(angle = as.integer(angle))
df_q6
```
```
## # A tibble: 16 × 5
## thick rep property angle value
## <dbl> <dbl> <chr> <int> <dbl>
## 1 0.022 1 E 0 10600
## 2 0.022 1 mu 0 0.321
## 3 0.022 1 E 45 10700
## 4 0.022 1 mu 45 0.329
## 5 0.022 2 E 0 10600
## 6 0.022 2 mu 0 0.323
## 7 0.022 2 E 45 10500
## 8 0.022 2 mu 45 0.331
## 9 0.032 1 E 0 10400
## 10 0.032 1 mu 0 0.329
## 11 0.032 1 E 45 10400
## 12 0.032 1 mu 45 0.318
## 13 0.032 2 E 0 10300
## 14 0.032 2 mu 0 0.319
## 15 0.032 2 E 45 10500
## 16 0.032 2 mu 45 0.326
```
Use the following test to check your work.
```
## NOTE: No need to change this!
assertthat::assert_that(dim(df_q6)[1] == 16)
```
```
## [1] TRUE
```
```
assertthat::assert_that(dim(df_q6)[2] == 5)
```
```
## [1] TRUE
```
```
print("Awesome!")
```
```
## [1] "Awesome!"
```
34\.4 Notes
-----------
\[1] This is because `\` has a special meaning in R, and we need to “escape” the backslash by doubling it: `\\`.
35 Vis: Themes
==============
*Purpose*: Themes control the fine visual details of a plot; to make really good\-looking graphs, we'll need to use `theme()`.
*Reading*: [`theme()` documentation](https://ggplot2.tidyverse.org/reference/theme.html) (Use as a reference; don’t read the whole thing!)
### 35\.0\.1 **q1** Use `theme_void()` and `guides()` (with an argument) to remove everything in this plot except the points.
```
mpg %>%
ggplot(aes(displ, hwy, color = class)) +
geom_point()
```
```
mpg %>%
ggplot(aes(displ, hwy, color = class)) +
geom_point() +
guides(color = "none") +
theme_void()
```
When I make presentation\-quality figures, I often start with the following stub code:
```
## NOTE: No need to edit; feel free to re-use this code!
theme_common <- function() {
theme_minimal() %+replace%
theme(
axis.text.x = element_text(size = 12),
axis.text.y = element_text(size = 12),
axis.title.x = element_text(margin = margin(4, 4, 4, 4), size = 16),
axis.title.y = element_text(margin = margin(4, 4, 4, 4), size = 16, angle = 90),
legend.title = element_text(size = 16),
legend.text = element_text(size = 12),
strip.text.x = element_text(size = 12),
strip.text.y = element_text(size = 12),
panel.grid.major = element_line(color = "grey90"),
panel.grid.minor = element_line(color = "grey90"),
aspect.ratio = 4 / 4,
plot.margin = unit(c(t = +0, b = +0, r = +0, l = +0), "cm"),
plot.title = element_text(size = 18),
plot.title.position = "plot",
plot.subtitle = element_text(size = 16),
plot.caption = element_text(size = 12)
)
}
```
The `%+replace%` magic above allows you to use `theme_common()` within your own ggplot calls.
### 35\.0\.2 **q2** Use `theme_common()` with the following graph. Document what’s changed by the `theme()` arguments.
```
mpg %>%
ggplot(aes(displ, hwy, color = class)) +
geom_point() +
labs(
x = "Engine Displacement (L)",
y = "Highway Fuel Economy (mpg)"
)
```
```
mpg %>%
ggplot(aes(displ, hwy, color = class)) +
geom_point() +
theme_common() +
labs(
x = "Engine Displacement (L)",
y = "Highway Fuel Economy (mpg)"
)
```
**Observations**:
* The text is larger, hence more readable
* The background was flipped from grey to white
* The guide lines have been flipped from white to grey
Calling `theme_common()`, along with setting `labs()` and making some smart choices about geoms and annotations, is often all you need to make a *really high\-quality graph*.
### 35\.0\.3 **q3** Make the following plot as *ugly as possible*; the more `theme()` arguments you use, the better!
*Hint*: Use the `theme()` settings from q2 above as a starting point, and read the documentation for `theme()` to learn how to do more horrible things to this graph.
```
mpg %>%
ggplot(aes(displ, hwy, color = class)) +
geom_point() +
theme(
axis.text.x = element_text(size = 32)
)
```
Here’s one possible graph:
```
mpg %>%
ggplot(aes(displ, hwy, color = class)) +
geom_point() +
guides(color = "none") +
theme(
line = element_line(size = 3, color = "purple"),
rect = element_rect(fill = "red"),
axis.text.x = element_text(size = 32, angle = 117),
axis.text.y = element_text(size = 32, angle = 129),
axis.title.x = element_text(size = 32, family = "Comic Sans MS"),
axis.title.y = element_text(size = 32, family = "Comic Sans MS")
)
```
```
## Warning: The `size` argument of `element_line()` is deprecated as of ggplot2 3.4.0.
## ℹ Please use the `linewidth` argument instead.
```
36 Data: Pipes and Placeholders
===============================
*Purpose*: The pipe `%>%` has more functionality than what we've used so far. In this exercise we'll learn about the *placeholder* `.`, which will give us more control over how data flows between our functions.
*Reading*: [The Pipe](https://magrittr.tidyverse.org/reference/pipe.html)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
### 36\.0\.1 **q1** Re\-write the following code to use the *placeholder*.
*Hint*: This may feel very simple, in which case good. This is not a trick question.
```
diamonds %>% glimpse(.)
```
```
## Rows: 53,940
## Columns: 10
## $ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.…
## $ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Ver…
## $ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I,…
## $ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1, …
## $ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 64…
## $ table <dbl> 55, 61, 65, 58, 58, 57, 57, 55, 61, 61, 55, 56, 61, 54, 62, 58…
## $ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 34…
## $ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4.…
## $ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4.…
## $ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2.…
```
### 36\.0\.2 **q2** Fix the lambda expression
The reading discussed *Using lambda expressions with `%>%`*; use this part of the reading to explain why the following code fails. Then fix the code so it runs without error.
```
2 %>%
{. * .}
```
```
## [1] 4
```
### 36\.0\.3 **q3** Re\-write the following code using the placeholder `.` operator to simplify the second filter.
*Hint*: You should be able to simplify the second call to `filter` down to just
`filter(cut == "Fair")`.
```
diamonds %>%
filter(carat <= 0.3) %>%
ggplot(aes(carat, price)) +
geom_point() +
geom_point(
data = . %>% filter(cut == "Fair"),
color = "red"
)
```
The placeholder even works at “later” points in a pipeline. We can use it to
help simplify code, as you did above.
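As one more added illustration (not part of the original exercises), the placeholder also lets the piped value land somewhere other than the first argument of the next function:
```
## Added illustration: when `.` appears as a top-level argument, the piped
## value is not also inserted as the first argument
c("a", "b", "c") %>% paste0("item_", .)
## should return: "item_a" "item_b" "item_c"
```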
37 Stats: Populations and Estimation
====================================
*Purpose*: Often, our data do not include all the facts that are relevant to the decision we are trying to make. Statistics is the science of determining the conclusions we can confidently make, based on our available data. To make sense of this, we need to understand the distinction between a *sample* and a *population*, and how this distinction leads to *estimation*.
*Reading*: [Statistician proves that statistics are boring](https://towardsdatascience.com/statistician-proves-that-statistics-are-boring-4fc22c95031b)
*Topics*: Population, sample, estimate, sampling distribution, standard error
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(nycflights13)
```
When using descriptive statistics to help us answer a question, there are (at least) two questions we should ask ourselves:
1. Does the statistic we chose relate to the problem we care about?
2. Do we have all the facts we need (the population) or do we have limited information (a sample from some well\-defined population)?
We already discussed (1\) by learning about descriptive statistics and their meaning. Now we’ll discuss (2\) by learning the distinction between populations and samples.
37\.1 Population
----------------
Let’s start by looking at an artificial population:
```
## NOTE: No need to change this!
tibble(z = seq(-4, +4, length.out = 500)) %>%
mutate(d = dnorm(z)) %>%
ggplot(aes(z, d)) +
geom_line()
```
Here our population is an infinite pool of observations all following the standard normal distribution. If this sounds abstract and unrealistic, good! Remember that the normal distribution (and indeed every named distribution) is an *abstract, mathematical object* that we use to model real phenomena.
Remember that a *sample* is a set of observations “drawn” from the population. The following is an example of three different samples from the same normal distribution, with different sample sizes.
```
## NOTE: No need to change this!
set.seed(101)
tibble(z = seq(-4, +4, length.out = 500)) %>%
mutate(d = dnorm(z)) %>%
ggplot() +
geom_histogram(
data = map_dfr(
c(10, 1e2, 1e3),
function(n) {tibble(Z = rnorm(n), n = n)}
),
mapping = aes(Z, y = ..density.., color = "Sample")
) +
geom_line(aes(z, d, color = "Population")) +
facet_grid(~n)
```
```
## Warning: The dot-dot notation (`..density..`) was deprecated in ggplot2 3.4.0.
## ℹ Please use `after_stat(density)` instead.
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
As we’ve seen before, as we draw more samples, their histogram tends to look more like the underlying population.
Now let’s look at a real example of a population:
```
## NOTE: No need to change this!
flights %>%
ggplot() +
geom_freqpoly(aes(air_time, color = "(All)")) +
geom_freqpoly(aes(air_time, color = origin))
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
```
## Warning: Removed 9430 rows containing non-finite values (`stat_bin()`).
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
```
## Warning: Removed 9430 rows containing non-finite values (`stat_bin()`).
```
This is the set of **all** flights originating from `EWR`, `LGA`, and `JFK` in 2013, in terms of their `air_time`. Note that this distribution is decidedly *not normal*; we would be foolish to try to model it as such!
As we saw in the reading, the choice of the “correct” population is not an exercise in math. This is a decision that you must make based on the problem you are trying to solve. For instance, if we care about all flights into the NYC area, then the `(All)` population is correct. But if we care only about flights out of `LGA`, the population is different. No amount of math can save you if you can’t pick the appropriate population for your problem!
When your data are not the entire population, any statistic you compute is an *estimate*.
37\.2 Estimates
---------------
When we don’t have all the facts and instead only have a sample, we perform *estimation* to extrapolate from our available data to the population we care about.
The following code draws multiple samples from a standard normal of size `n_observations`, and does so `n_samples` times. We’ll visualize these data in a later chunk.
```
## NOTE: No need to change this!
n_observations <- 3
n_samples <- 5e3
df_sample <-
map_dfr(
1:n_samples,
function(id) {
tibble(
Z = rnorm(n_observations),
id = id
)
}
)
```
Some terminology:
* We call a statistic of a population a *population statistic*; for instance, the population mean. A population statistic is also called a *parameter*.
* We call a statistic of a sample a *sample statistic*; for instance, the sample mean. A sample statistic is also called an *estimate*. (See the short sketch just below for a concrete example.)
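To make these terms concrete, here is a minimal sketch for illustration only; the sample size and object name are arbitrary and not part of the exercise. The population is a standard normal, so the parameter (population mean) is exactly zero, while the sample mean is only an estimate of it.
```
## NOTE: Illustrative sketch, not part of the original exercise
set.seed(101)
z_sketch <- rnorm(50) # one sample of 50 observations
mean(z_sketch)        # sample statistic (estimate) of the population mean (parameter), which is 0
```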
The chunk `compute-samples` generated `n_samples = 5e3` samples of `n_observations = 3` observations each. You can think of each sample as an “alternative universe” where we happened to pick `3` particular values. The following chunk visualizes just the first few samples:
```
df_sample %>%
filter(id <= 6) %>%
ggplot(aes(Z, "")) +
geom_point() +
facet_grid(id ~ .) +
labs(
x = "Realized Values",
y = "Samples"
)
```
Every one of these samples has its own sample mean; let’s add that as an additional point:
```
df_sample %>%
filter(id <= 6) %>%
ggplot(aes(Z, "")) +
geom_point() +
geom_point(
data = . %>% group_by(id) %>% summarize(Z = mean(Z)),
mapping = aes(color = "Sample Mean"),
size = 4
) +
scale_color_discrete(name = "") +
theme(legend.position = "bottom") +
facet_grid(id ~ .) +
labs(
x = "Realized Values",
y = "Samples"
)
```
Thus, there is a “red dot” associated with each of the 5,000 samples. Let’s visualize the individual sample mean values as a histogram:
```
## NOTE: No need to change this!
df_sample %>%
group_by(id) %>%
summarize(mean = mean(Z)) %>%
ggplot(aes(mean)) +
geom_histogram() +
geom_vline(xintercept = 0, linetype = 2) +
labs(
x = "Sample Mean"
)
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
Remember that the standard normal has population mean zero (vertical line); the distribution we see here is of the sample mean values. These results indicate that we frequently land near zero (the true population value), but we can obtain values as far out as `-2` and `+2`. This is because we have limited data from our population, and our estimate is not guaranteed to be close to its population value. As we gather more data, we’ll tend to produce better estimates.
To illustrate the effects of more data, I use a little mathematical theory to quickly visualize estimates of the mean at different sample sizes.
```
## NOTE: No need to change this!
map_dfr(
c(3, 12, 48, 192),
function(n) {
tibble(z = seq(-4, +4, length.out = 500)) %>%
mutate(
d = dnorm(z, sd = 1 / sqrt(n)),
n = n
)
}
) %>%
ggplot() +
geom_line(aes(z, d, color = as.factor(n), group = as.factor(n))) +
scale_color_discrete(name = "Samples") +
labs(
x = "Estimated Mean",
title = "Sampling Distributions: Estimated Mean",
caption = "Population: Normal"
)
```
As we might expect, the distribution of estimated means concentrates on the true value of zero as we increase the sample size \\(n\\). As we gather more data, our estimate has a greater probability of landing close to the true value.
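We can also check this concentration directly by simulation. The following is a quick sketch, not part of the exercise; the replication count and column names are arbitrary. It compares the simulated spread of the sample mean against the theoretical value \\(1 / \\sqrt{n}\\):
```
## NOTE: Illustrative sketch, not part of the original exercise
set.seed(101)
map_dfr(
  c(3, 12, 48, 192),
  function(n) {
    tibble(
      n = n,
      sd_simulated = sd(map_dbl(1:1000, ~ mean(rnorm(n)))), # spread of 1000 simulated sample means
      sd_theory = 1 / sqrt(n)                                # theoretical standard error
    )
  }
)
```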
The distribution for an estimate is called a *sampling distribution*; the visualization above is a lineup of sampling distributions for the estimated mean. It happens that all of those distributions are normal. However, the sampling distribution is *not guaranteed* to look like the underlying population. For example, let’s look at the sample standard deviation.
```
## NOTE: No need to change this!
df_sample %>%
group_by(id) %>%
summarize(sd = sd(Z)) %>%
ggplot(aes(sd)) +
geom_histogram() +
labs(
x = "Estimated Standard Deviation"
)
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
Note that this doesn’t look much like a normal distribution. This should make some intuitive sense: The standard deviation is guaranteed to be non\-negative, so it can’t possibly follow a normal distribution, which can take values anywhere from \\(\-\\infty\\) to \\(\+\\infty\\).
### 37\.2\.1 **q1** Modify the code below to draw samples from a uniform distribution (rather than a normal). Describe (in words) what the resulting sampling distribution looks like. Does the sampling distribution look like a normal distribution?
```
## TASK: Modify the code below to sample from a uniform distribution
df_samp_unif <-
map_dfr(
1:n_samples,
function(id) {
tibble(
Z = runif(n_observations),
id = id
)
}
)
df_samp_unif %>%
group_by(id) %>%
summarize(stat = mean(Z)) %>%
ggplot(aes(stat)) +
geom_histogram() +
labs(
x = "Estimated Mean",
title = "Sampling Distribution: Estimated Mean",
caption = "Population: Uniform"
)
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
**Observations**:
37\.3 Intermediate conclusion
-----------------------------
A *sampling distribution* is the distribution for a *sample estimate*. It is induced by the population, but is also a function of the specific statistic we’re considering. We will use statistics to help make sense of sampling distributions.
37\.4 Standard Error
--------------------
The standard deviation of a sampling distribution gets a special name: the [*standard error*](https://en.wikipedia.org/wiki/Sampling_distribution#Standard_error). The standard error of an estimated mean is
\\\[\\text{SE} \= \\sigma / \\sqrt{n}.\\]
This is a formula worth memorizing; it implies that doubling the precision of an estimated mean requires *quadrupling* the sample size. It also tells us that a more variable population (larger \\(\\sigma\\)) will make estimation more difficult (larger \\(\\text{SE}\\)).
The standard error is a convenient way to summarize the accuracy of an estimation setting; the larger our standard error, the less accurate our estimates will tend to be.
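For a quick numerical illustration of this arithmetic (a sketch only; the values of \\(\\sigma\\) and \\(n\\) below are arbitrary choices):
```
## NOTE: Illustrative sketch; sigma and n are arbitrary
sigma <- 2
n <- 25
sigma / sqrt(n)     # standard error at sample size n
sigma / sqrt(4 * n) # quadrupling n cuts the standard error in half
```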
### 37\.4\.1 **q2** Compute the standard error for the sample mean under the following settings. Which setting will tend to produce more accurate estimates?
```
## TASK: Compute the standard error
se_q2.1 <- 4 / sqrt(16)
se_q2.2 <- 8 / sqrt(32)
```
Use the following tests to check your work.
```
## NOTE: No need to change this!
assertthat::assert_that(assertthat::are_equal(se_q2.1, 1))
```
```
## [1] TRUE
```
```
assertthat::assert_that(assertthat::are_equal(se_q2.2, sqrt(2)))
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
**Observations**:
* Setting q2\.1 will tend to be more accurate because its standard error is lower
Two notes:
1. Note the language above: The standard error tells us about *settings* (population \\(\\sigma\\) and sample count \\(n\\)), not *estimates* themselves. The accuracy of *an individual estimate* would depend on \\(\\hat{\\mu} \- \\mu\\), but in practice we never know \\(\\mu\\) exactly. The standard error will tell us how variable \\(\\hat{\\mu}\\) will be on average, but does not give us any information about the specific value of \\(\\hat{\\mu} \- \\mu\\) for any given estimate \\(\\hat{\\mu}\\).
The standard error gives us an idea of how accurate our estimate will tend to be, but due to randomness we don’t know the true accuracy of our estimate.
2. Note that we used the *population* standard deviation above; in practice we’ll only have a *sample* standard deviation. In this case we can use a *plug\-in* estimate for the standard error
\\\[\\hat{\\text{SE}} \= s / \\sqrt{n},\\]
where the hat on \\(\\text{SE}\\) denotes that this quantity is an estimate, and \\(s\\) is the sample standard deviation.
### 37\.4\.2 **q3** Compute the sample standard error of the sample mean for the sample below. Compare your estimate against the true value `se_q2.1`. State how similar or different the values are, and explain the difference.
```
## NOTE: No need to change this!
set.seed(101)
n_sample <- 20
z_sample <- rnorm(n = n_sample, mean = 2, sd = 4)
## TASK: Compute the sample standard error for `z_sample`
se_sample <- sd(z_sample) / sqrt(n_sample)
```
Use the following tests to check your work.
```
## NOTE: No need to change this!
assertthat::assert_that(
assertthat::are_equal(
se_sample,
sd(z_sample) / sqrt(n_sample)
)
)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
**Observations**:
* I find that `se_sample` is about 3/4 of the true value, which is a large difference.
* The value `se_sample` is just an estimate; it is inaccurate due to randomness.
37\.5 Fast Summary
------------------
The *population* is the set of all things we care about. No amount of math can help you here: *You* are responsible for defining your population. If we have the whole population, we don’t need statistics!
When we *don’t* have all the data from the population, we need to *estimate*. The combined effects of random sampling, the shape of the population, and our chosen statistic all give rise to a *sampling distribution* for our estimated statistic. The standard deviation of the sampling distribution is called the *standard error*; it is a measure of accuracy of the *sampling procedure*, not the estimate itself.
38 Data: Window Functions
=========================
*Purpose*: Window functions are another family of `dplyr` verbs that are related to aggregates like `mean` and `sd`. These functions are useful for building up more complicated filters, enabling aesthetic tricks in plots, and some advanced data wrangling we’ll do next exercise.
*Reading*: [Window Functions](https://dplyr.tidyverse.org/articles/window-functions.html#cumulative-aggregates-1), *Types of window functions*, *Ranking functions*, and *Lead and lag*
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(ggrepel)
```
38\.1 Lead and Lag
------------------
The lead and lag functions simply provide a “shifted” copy of a vector.
```
## NOTE: No need to edit this; just an example
v <- c(1, 2, 3, 4, 5)
lead(v)
```
```
## [1] 2 3 4 5 NA
```
```
lag(v)
```
```
## [1] NA 1 2 3 4
```
These are particularly useful for computing things like differences:
```
## NOTE: No need to edit this; just an example
x <- seq(-1, +1, length.out = 6)
f <- x ^ 2
## Forward finite difference
df_dx <- (lead(f) - f) / (lead(x) - x)
df_dx
```
```
## [1] -1.600000e+00 -8.000000e-01 2.255141e-16 8.000000e-01 1.600000e+00
## [6] NA
```
Make sure to order your data or use the `order_by` argument when using `lead` or `lag`! ggplot2 automatically reorders your data when making a line plot, but `lead` and `lag` will use the order of the data you provide.
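Here is a small sketch showing the difference; the toy vectors below are made up for illustration and are not part of the exercise:
```
## NOTE: Illustrative sketch; toy data only
t_obs <- c(3, 1, 2)          # observation "times", out of order
y_obs <- c(30, 10, 20)       # corresponding values
lag(y_obs)                   # previous *row*:  NA 30 10
lag(y_obs, order_by = t_obs) # previous *time*: 20 NA 10
```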
### 38\.1\.1 **q1** Use a window function to modify the following visual, coloring each segment based on whether `unemploy` was increasing or decreasing over that period of time.
```
economics %>%
arrange(date) %>%
mutate(
delta = lead(unemploy, order_by = date) - unemploy,
Positive = delta > 0
) %>%
ggplot(aes(date, unemploy, color = Positive)) +
geom_segment(aes(
xend = lead(date, order_by = date),
yend = lead(unemploy, order_by = date)
))
```
```
## Warning: Removed 1 rows containing missing values (`geom_segment()`).
```
38\.2 Ranks
-----------
The rank functions allow you to assign (integer) ranks to the smallest (or largest) values of a vector.
```
## NOTE: No need to edit this; just an example
v <- c(1, 1, 2, 3, 5)
row_number(v)
```
```
## [1] 1 2 3 4 5
```
```
min_rank(v)
```
```
## [1] 1 1 3 4 5
```
```
dense_rank(v)
```
```
## [1] 1 1 2 3 4
```
You can use the `desc()` function (or a negative sign) to reverse the ranking order.
```
## NOTE: No need to edit this; just an example
v <- c(1, 1, 2, 3, 5)
row_number(desc(v))
```
```
## [1] 4 5 3 2 1
```
```
min_rank(desc(v))
```
```
## [1] 4 4 3 2 1
```
```
dense_rank(-v)
```
```
## [1] 4 4 3 2 1
```
I find it difficult to remember how the rank functions behave, so I created the following visual to help remind myself how they function.
```
## NOTE: No need to edit this; just an example
set.seed(101)
tribble(
~x, ~y,
0, 0,
1, 0,
1, 1,
0, 2,
2, 2,
0, 3,
2, 3,
3, 3
) %>%
mutate(
rk_row = row_number(y),
rk_min = min_rank(y),
rk_dense = dense_rank(y)
) %>%
pivot_longer(
names_to = "fcn",
names_prefix = "rk_",
values_to = "rk",
cols = c(-x, -y)
) %>%
ggplot(aes(x, y)) +
geom_point(size = 4) +
geom_point(
data = . %>% filter(rk <= 3),
size = 3,
color = "orange"
) +
geom_label(aes(label = rk), nudge_x = 0.2, nudge_y = 0.2) +
facet_wrap(~fcn) +
theme_minimal() +
theme(panel.border = element_rect(color = "black", fill = NA, size = 1)) +
labs(
x = "",
y = "Minimum Three Ranks"
)
```
```
## Warning: The `size` argument of `element_rect()` is deprecated as of ggplot2 3.4.0.
## ℹ Please use the `linewidth` argument instead.
```
### 38\.2\.1 **q2** Use a rank function to filter the largest 3 `hwy` values and **all** vehicles that have those values.
```
q2 <-
mpg %>%
filter(dense_rank(desc(hwy)) <= 3)
q2
```
```
## # A tibble: 4 × 11
## manufacturer model displ year cyl trans drv cty hwy fl class
## <chr> <chr> <dbl> <int> <int> <chr> <chr> <int> <int> <chr> <chr>
## 1 toyota corolla 1.8 2008 4 manua… f 28 37 r comp…
## 2 volkswagen jetta 1.9 1999 4 manua… f 33 44 d comp…
## 3 volkswagen new beetle 1.9 1999 4 manua… f 35 44 d subc…
## 4 volkswagen new beetle 1.9 1999 4 auto(… f 29 41 d subc…
```
Use the following test to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(dim(q2)[1] == 4)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
39 Stats: Moment Arithmetic
===========================
*Purpose*: In a future exercise, we will need to be able to do some basic arithmetic with *moments* of a distribution. To prepare for this later exercise, we’ll do some practice now.
*Reading*: (None, this is the reading)
*Topics*: Moments, moment arithmetic, standardization
39\.1 Moments
-------------
Moments are a particular kind of statistic. There is a general, mathematical definition of a [moment](https://en.wikipedia.org/wiki/Moment_(mathematics)), but we will only need to talk about two in this class.
We’ve already seen the *mean*; this is also called the expectation. For a random variable \\(X\\), the expectation is defined in terms of its pdf \\(\\rho(x)\\) via
\\\[\\mathbb{E}\[X] \= \\int x \\rho(x) dx.\\]
We’ve also seen the standard deviation \\(\\sigma\\). This is related to the variance \\(\\sigma^2\\), which is defined for a random variable \\(X\\) in terms of the expectation
\\\[\\sigma^2 \\equiv \\mathbb{V}\[X] \= \\mathbb{E}\[(X \- \\mathbb{E}\[X])^2].\\]
For instance, a standard normal \\(Z\\) has
\\\[ \\begin{aligned}
\\mathbb{E}\[Z] \&\= 0 \\\\
\\mathbb{V}\[Z] \&\= 1
\\end{aligned} \\]
For future exercises, we’ll need to learn how to do basic arithmetic with these two moments.
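As a quick numerical check of these two moments, here is a simulation sketch (not part of the reading; the number of draws is arbitrary):
```
## NOTE: Illustrative sketch; simulation-based check of E[Z] and V[Z]
set.seed(101)
z <- rnorm(1e5)
mean(z) # should be close to 0
var(z)  # should be close to 1
```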
39\.2 Moment Arithmetic
-----------------------
We will need to be able to do some basic arithmetic with the mean and variance. The following exercises will help you remember this basic arithmetic.
39\.3 Expectation
-----------------
The expectation is *linear*, that is
\\\[\\mathbb{E}\[aX \+ c] \= a \\mathbb{E}\[X] \+ c.\\]
We can use this fact to compute the mean of simply transformed random variables.
### 39\.3\.1 **q1** Compute the mean of \\(2 Z \+ 3\\), where \\(Z\\) is a standard normal.
```
## TASK: Compute the mean of 2 Z + 3
E_q1 <- 3
```
Use the following test to check your answer.
```
## NOTE: No need to change this!
assertthat::assert_that(assertthat::are_equal(E_q1, 3))
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
Since the expectation is linear, it also satisfies
\\\[\\mathbb{E}\[aX \+ bY] \= a \\mathbb{E}\[X] \+ b \\mathbb{E}\[Y].\\]
### 39\.3\.2 **q2** Compute the mean of \\(2 Z\_1 \+ 3 Z\_2\\), where \\(Z\_1, Z\_2\\) are separate standard normals.
```
## TASK: Compute the mean of 2 Z1 + 3 Z2
E_q2 <- 2 * 0 + 3 * 0
```
Use the following test to check your answer.
```
## NOTE: No need to change this!
assertthat::assert_that(assertthat::are_equal(E_q2, 0))
```
```
## [1] TRUE
```
```
print("Great!")
```
```
## [1] "Great!"
```
39\.4 Variance
--------------
Remember that variance is the square of standard deviation. Variance satisfies the property
\\\[\\mathbb{V}\[aX \+ c] \= a^2 \\mathbb{V}\[X].\\]
### 39\.4\.1 **q3** Compute the variance of \\(2 Z \+ 3\\), where \\(Z\\) is a standard normal.
```
## TASK: Compute the variance of 2 Z + 3
V_q3 <- 2 ^ 2
```
Use the following test to check your answer.
```
## NOTE: No need to change this!
assertthat::assert_that(assertthat::are_equal(V_q3, 4))
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
The variance of a *sum* of random variables is a bit more complicated
\\\[\\mathbb{V}\[aX \+ bY] \= a^2 \\mathbb{V}\[X] \+ b^2 \\mathbb{V}\[Y] \+ 2ab \\text{Cov}\[X, Y],\\]
where \\(\\text{Cov}\[X, Y]\\) denotes the [covariance](https://en.wikipedia.org/wiki/Covariance) of \\(X, Y\\). Covariance is closely related to correlation, which we discussed in `e-stat03-descriptive`. If two random variables \\(X, Y\\) are *uncorrelated*, then \\(\\text{Cov}\[X, Y] \= 0\\).
### 39\.4\.2 **q4** Compute the variance of \\(2 Z\_1 \+ 3 Z\_2\\), where \\(Z\_1, Z\_2\\) are *uncorrelated* standard normals.
```
## TASK: Compute the variance of 2 Z1 + 3 Z2
V_q4 <- 2^2 + 3^2
```
Use the following test to check your answer.
```
## NOTE: No need to change this!
assertthat::assert_that(assertthat::are_equal(V_q4, 13))
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
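If you’d like a simulation-based check of the last two results, here is a quick sketch (not part of the exercises; the number of draws is arbitrary):
```
## NOTE: Illustrative sketch; checks the q2 and q4 results by simulation
set.seed(101)
z1 <- rnorm(1e5)
z2 <- rnorm(1e5)
w <- 2 * z1 + 3 * z2
mean(w) # should be close to 0 = 2 * 0 + 3 * 0
var(w)  # should be close to 13 = 2^2 + 3^2
```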
39\.5 Standardization
---------------------
The following two exercises illustrate two important transformations.
### 39\.5\.1 **q5** Compute the mean and variance of \\((X \- 1\) / 2\\), where
\\\[\\mathbb{E}\[X] \= 1, \\mathbb{V}\[X] \= 4\\].
```
## TASK: Compute the mean and variance
E_q3 <- 0
V_q3 <- 1
```
Use the following test to check your answer.
```
## NOTE: No need to change this!
assertthat::assert_that(assertthat::are_equal(E_q3, 0))
```
```
## [1] TRUE
```
```
assertthat::assert_that(assertthat::are_equal(V_q3, 1))
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
This process of centering (setting the mean to zero) and scaling a random variable is called *standardization*. For instance, if \\(X\\) is a normal random variable, then \\((X \- \\mu) / \\sigma \= Z\\) is a standard normal.
### 39\.5\.2 **q6** Compute the mean and variance of \\(1 \+ 2 Z\\), where \\(Z\\) is a standard normal.
```
## TASK: Compute the mean and variance
E_q4 <- 1
V_q4 <- 4
```
Use the following test to check your answer.
```
## NOTE: No need to change this!
assertthat::assert_that(assertthat::are_equal(E_q4, 1))
```
```
## [1] TRUE
```
```
assertthat::assert_that(assertthat::are_equal(V_q4, 4))
```
```
## [1] TRUE
```
```
print("Excellent!")
```
```
## [1] "Excellent!"
```
This example illustrates that we can create a normal with desired mean and standard deviation by transforming a standard normal \\(\\mu \+ \\sigma Z \= X\\).
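Here is a quick simulation sketch of both directions of this transformation; the values of \\(\\mu, \\sigma\\) below are arbitrary choices for illustration only:
```
## NOTE: Illustrative sketch; mu and sigma are arbitrary
set.seed(101)
mu <- 1
sigma <- 2
x <- mu + sigma * rnorm(1e5) # build X = mu + sigma * Z
c(mean(x), sd(x))            # should be close to (1, 2)
z <- (x - mu) / sigma        # standardize back
c(mean(z), sd(z))            # should be close to (0, 1)
```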
39\.6 Standard Error
--------------------
The variance satisfies the property
\\\[\\mathbb{V}\[aX \+ bY] \= a^2 \\mathbb{V}\[X] \+ b^2 \\mathbb{V}\[Y] \+ 2ab \\text{Cov}\[X, Y],\\]
where
\\\[\\text{Cov}\[X, Y] \= \\mathbb{E}\[(X \- \\mathbb{E}\[X])(Y \- \\mathbb{E}\[Y])]\\]
is the *covariance* between \\(X\\) and \\(Y\\). If \\(X, Y\\) are independent, then the covariance between them is zero.
Using this expression, we can prove that the standard error of the sample mean \\(\\overline{X}\\) is \\(\\sigma / \\sqrt{n}\\).
### 39\.6\.1 **q7** (Bonus) Use the identity above to prove that
\\\[\\mathbb{V}\[\\overline{X}] \= \\sigma^2 / n,\\]
where \\(\\overline{X} \= \\frac{1}{n}\\sum\_{i\=1}^n X\_i\\), \\(\\sigma^2 \= \\mathbb{V}\[X]\\), and the \\(X\_i\\) are mutually independent.
The quantity
\\\[\\sqrt{\\mathbb{V}\[\\overline{X}]}\\]
is called the *standard error of the mean*; more generally the *standard error*
for a statistic is the standard deviation of its sampling distribution. We’ll return to this concept in `e-stat06`.
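If you’d like a hint for the bonus question, here is a sketch of one possible argument (you should fill in the justification for each step): since the \\(X\_i\\) are mutually independent, every covariance term vanishes, so the variance of the sum is the sum of the variances and
\\\[\\mathbb{V}\[\\overline{X}] \= \\frac{1}{n^2} \\sum\_{i\=1}^n \\mathbb{V}\[X\_i] \= \\frac{n \\sigma^2}{n^2} \= \\sigma^2 / n.\\]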
40 Stats: The Central Limit Theorem and Confidence Intervals
============================================================
*Purpose*: When studying sampled data, we need a principled way to report our results with their uncertainties. Confidence intervals (CI) are an excellent way to summarize results, and the central limit theorem (CLT) helps us to construct these intervals.
*Reading*: (None, this is the reading)
*Topics*: The central limit theorem (CLT), confidence intervals
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(nycflights13)
```
40\.1 Central Limit Theorem
---------------------------
Let’s return to a result from `e-stat04-population`:
```
## NOTE: No need to edit this
set.seed(101)
n_observations <- 9
n_samples <- 5e3
df_samp_unif <-
map_dfr(
1:n_samples,
function(id) {
tibble(
Z = runif(n_observations),
id = id
)
}
)
df_samp_unif %>%
group_by(id) %>%
summarize(stat = mean(Z)) %>%
ggplot(aes(stat)) +
geom_histogram() +
labs(
x = "Estimated Mean",
title = "Sampling Distribution: Estimated Mean",
caption = "Population: Uniform"
)
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
If you said that the sampling distribution from the exercise above looks roughly normal, then you are correct! This is an example of the [central limit theorem](https://en.wikipedia.org/wiki/Central_limit_theorem), a central idea in statistics. Here we’ll introduce the central limit theorem (CLT), use it to approximate the sampling distribution for the sample mean, and in turn use that to construct an approximate *confidence interval*.
For populations satisfying mild conditions\[1], the sample mean \\(\\overline{X}\\) converges to a normal distribution as the sample size \\(n\\) approaches infinity. Specifically
\\\[\\overline{X} \\stackrel{d}{\\to} N(\\mu, \\sigma^2 / n),\\]
where \\(\\mu\\) is the mean of the population, \\(\\sigma\\) is the standard deviation of the population, and \\(\\stackrel{d}{\\to}\\) means [*converges in distribution*](https://en.wikipedia.org/wiki/Convergence_of_random_variables#Convergence_in_distribution), a technical definition that is beyond the scope of this lesson.
Below I simulate sampling from a uniform distribution and compute the mean at different sample sizes to illustrate the CLT:
```
## NOTE: No need to change this!
set.seed(101)
n_repl <- 5e3
df_clt <-
map_dfr(
1:n_repl,
function(id) {
map_dfr(
c(1, 2, 9, 81, 729),
function(n) {
tibble(
Z = runif(n),
n = n,
id = id
)
}
)
}
) %>%
group_by(n, id) %>%
summarize(mean = mean(Z), sd = sd(Z))
```
```
## `summarise()` has grouped output by 'n'. You can override using the `.groups`
## argument.
```
Let’s visualize the sampling distribution for each sample size:
```
df_clt %>%
ggplot(aes(mean)) +
geom_density() +
facet_wrap(~n, scales = "free")
```
At a sample size of just `1`, the sample mean is \\(X\_1 / 1\\); we’re just drawing single observations from the population, so we see a uniform distribution. At `2` we see something that looks like a tent. By `9` observations we see a distribution that looks quite normal.
The CLT doesn’t work for *all* problems. The CLT is often used for sums of random variables—the mean is one such sum. However, something like a quantile is not estimated by a sum of random variables, so we can’t use the CLT to approximate a sampling distribution. Later we’ll learn a more general tool to approximate sampling distributions for general statistics—*the bootstrap*.
Note that the CLT tells us about estimates like the sample mean; it does *not* tell us anything about the distribution of the underlying population. We will use the CLT to help construct *confidence intervals*.
40\.2 Confidence Intervals
--------------------------
Let’s learn about confidence intervals by way of example. I’ll lay out a procedure, then explain how it works.
First, let’s use some moment arithmetic to build a normal distribution with mean \\(\\mu\\) and standard deviation \\(\\sigma / \\sqrt{n}\\) out of a standard normal \\(Z\\). This gives us
\\\[X \= \\mu \+ (\\sigma / \\sqrt{n}) Z.\\]
Now imagine we wanted to select two endpoints to give us the middle \\(95%\\) of this distribution. We could do this with `qnorm()` using the appropriate values of `mean, sd`. But using the definition of \\(X\\) above, we can also do this with the appropriate quantiles of the standard normal \\(Z\\). The following code gives the upper quantile.
```
## NOTE: No need to change this!
q95 <- qnorm( 1 - (1 - 0.95) / 2 )
q95
```
```
## [1] 1.959964
```
This is approximately `1.96` when seeking a \\(95%\\) confidence level. Since the standard normal distribution is symmetric about zero, we can use the same value `q95` with a negative sign for the appropriate lower quantile.
*Here’s the procedure*: we’ll build lower and upper bounds for an interval based on the sample mean and its estimated standard error, \\(\[\\hat{\\mu} \- q\_{95} \\hat{\\text{SE}}, \\hat{\\mu} \+ q\_{95} \\hat{\\text{SE}}]\\). I construct this interval for each sample in `df_clt` and check whether the interval contains the population mean of `0.5`. The following code visualizes the first `100` intervals.
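Before looking at all of `df_clt`, here is a minimal sketch of the procedure applied to a single sample; the helper name `ci_mean()` is made up for illustration and is not part of the exercise.
```
## NOTE: Illustrative sketch; `ci_mean()` is a made-up helper
ci_mean <- function(x, confidence = 0.95) {
  q <- qnorm(1 - (1 - confidence) / 2)
  se <- sd(x) / sqrt(length(x))
  c(lo = mean(x) - q * se, hi = mean(x) + q * se)
}
set.seed(101)
ci_mean(runif(81)) # this interval should (usually) contain the true mean of 0.5
```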
```
## NOTE: No need to change this!
df_clt %>%
filter(
n > 1,
id <= 100
) %>%
mutate(
se = sd / sqrt(n),
lo = mean - q95 * se,
hi = mean + q95 * se
) %>%
ggplot(aes(id)) +
geom_hline(yintercept = 0.5, linetype = 2) +
geom_errorbar(aes(
ymin = lo,
ymax = hi,
color = (lo <= 0.5) & (0.5 <= hi)
)) +
facet_grid(n~.) +
scale_color_discrete(name = "CI Contains True Mean") +
theme(legend.position = "bottom") +
labs(
x = "Replication",
y = "Estimated Mean"
)
```
Some observations to note:
* The confidence intervals tend to be larger when \\(n\\) is small, and shrink as \\(n\\) increases.
* We tend to have more “misses” when \\(n\\) is small.
* Every confidence interval either **does** or **does not** include the true value. Therefore a single confidence interval actually has no probability associated with it. The “confidence” is not in a single interval, but rather in the procedure that generated the interval.
The following code estimates the frequency with which each interval includes the true mean; this quantity is called *coverage*, and it should match the nominal \\(95%\\) we selected above.
```
## NOTE: No need to change this!
df_clt %>%
filter(n > 1) %>%
mutate(
se = sd / sqrt(n),
lo = mean - q95 * se,
hi = mean + q95 * se,
flag = (lo <= 0.5) & (0.5 <= hi)
) %>%
group_by(n) %>%
summarize(coverage = mean(flag))
```
```
## # A tibble: 4 × 2
## n coverage
## <dbl> <dbl>
## 1 2 0.661
## 2 9 0.908
## 3 81 0.945
## 4 729 0.950
```
Some observations to note:
* The coverage is well below our desired \\(95%\\) when \\(n\\) is small; the CLT normal approximation (and the plug\-in estimate of \\(\\sigma\\)) is poor at small sample sizes.
* As \\(n\\) increases, the coverage tends towards our desired \\(95%\\).
[This animation](https://seeing-theory.brown.edu/frequentist-inference/index.html) is the best visual explanation I’ve found of how confidence intervals are constructed \[2].
### 40\.2\.1 **q1** Using the CLT, approximate a \\(99\\%\\) confidence interval for the population mean using the sample `z_q1`.
```
## TASK: Estimate a 99% confidence interval with the sample below
set.seed(101)
z_q1 <- rnorm(n = 100, mean = 1, sd = 2)
lo_q1 <- mean(z_q1) - qnorm( 1 - (1 - 0.99) / 2 ) * sd(z_q1) / sqrt(100)
hi_q1 <- mean(z_q1) + qnorm( 1 - (1 - 0.99) / 2 ) * sd(z_q1) / sqrt(100)
```
Use the following tests to check your answer.
```
## NOTE: No need to change this!
assertthat::assert_that(abs(lo_q1 - 0.4444163) < 1e-6)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(hi_q1 - 1.406819) < 1e-6)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
40\.3 Making Comparisons with CI
--------------------------------
Why would we bother with constructing a confidence interval? Let’s take a look at a real example with the NYC flight data.
Let’s suppose we were trying to determine whether the mean arrival delay time of American Airlines (`AA`) flights is greater than zero. We have the population of 2013 flights, so we can answer this definitively:
```
## NOTE: No need to change this!
df_flights_aa <-
flights %>%
filter(carrier == "AA") %>%
summarize(across(
arr_delay,
c(
"mean" = ~mean(., na.rm = TRUE),
"sd" = ~sd(., na.rm = TRUE),
"n" = ~length(.)
)
))
df_flights_aa
```
```
## # A tibble: 1 × 3
## arr_delay_mean arr_delay_sd arr_delay_n
## <dbl> <dbl> <int>
## 1 0.364 42.5 32729
```
The `arr_delay_mean` is greater than zero, so case closed.
But imagine we only had a sample of flights, rather than the whole population. The following code randomly samples the `AA` flights, and repeats this process at a few different sample sizes. I also construct confidence intervals: If the confidence interval has its lower bound greater than zero, then we can be reasonably confident the mean delay time is greater than zero.
```
## NOTE: No need to change this!
set.seed(101)
# Downsample at different sample sizes, construct a confidence interval
df_flights_sampled <-
map_dfr(
c(5, 10, 25, 50, 100, 250, 500), # Sample sizes
function(n) {
flights %>%
filter(carrier == "AA") %>%
slice_sample(n = n) %>%
summarize(across(
arr_delay,
c(
"mean" = ~mean(., na.rm = TRUE),
"se" = ~sd(., na.rm = TRUE) / length(.)
)
)) %>%
mutate(
arr_delay_lo = arr_delay_mean - 1.96 * arr_delay_se,
arr_delay_hi = arr_delay_mean + 1.96 * arr_delay_se,
n = n
)
}
)
# Visualize
df_flights_sampled %>%
ggplot(aes(n, arr_delay_mean)) +
geom_hline(
data = df_flights_aa,
mapping = aes(yintercept = arr_delay_mean),
size = 0.1
) +
geom_hline(yintercept = 0, color = "white", size = 2) +
geom_errorbar(aes(
ymin = arr_delay_lo,
ymax = arr_delay_hi,
color = (0 < arr_delay_lo)
)) +
geom_point() +
scale_x_log10() +
scale_color_discrete(name = "Confidently Greater than Zero?") +
theme(legend.position = "bottom") +
labs(
x = "Samples",
y = "Arrival Delay (minutes)",
title = "American Airlines Delays"
)
```
```
## Warning: Using `size` aesthetic for lines was deprecated in ggplot2 3.4.0.
## ℹ Please use `linewidth` instead.
```
These confidence intervals illustrate a number of different sampling scenarios. In some of them, we correctly determine that the mean arrival delay is confidently greater than zero. The case at \\(n \= 100\\) is inconclusive; the CI is compatible with both positive and negative mean delay times. Note the two lowest \\(n\\) cases; there we “confidently” determine that the mean arrival delay is negative \[3]. Any time we are doing estimation we are in danger of making an incorrect conclusion, even when we do the statistics correctly! Obtaining more data simply decreases the probability of making a false conclusion \[4].
However, combining all our available information to form a confidence interval is a principled way to report our results. A confidence interval gives us a plausible range of values for the population value, and by its width gives us a sense of how accurate our estimate is likely to be.
40\.4 (Bonus) Deriving an Approximate Confidence Interval
---------------------------------------------------------
(This is bonus content provided for the curious reader.)
Under the CLT, the sampling distribution for the sample mean is
\\\[\\overline{X} \\sim N(\\mu, \\sigma^2 / n).\\]
We can standardize this quantity to form
\\\[(\\overline{X} \- \\mu) / (\\sigma / \\sqrt{n}) \\sim N(0, 1^2\).\\]
This is called a *pivotal quantity*; it is a quantity whose distribution does not depend on the parameters we are trying to estimate. The upper and lower quantiles corresponding to a symmetric \\(C\\) confidence level are `q_C = qnorm( 1 - (1 - C) / 2 )` and `-q_C`, respectively, which means
\\\[\\mathbb{P}\[\-q\_C \< (\\overline{X} \- \\mu) / (\\sigma / \\sqrt{n}) \< \+q\_C] \= C.\\]
With a small amount of arithmetic, we can re\-arrange the inequalities inside the probability statement to write
\\\[\\mathbb{P}\[\\overline{X} \- q\_C (\\sigma / \\sqrt{n}) \< \\mu \< \\overline{X} \+ q\_C (\\sigma / \\sqrt{n})] \= C.\\]
Using a plug\-in estimate for \\(\\sigma\\) gives the procedure defined above.
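To make the procedure concrete, here is a minimal helper function (a sketch of my own; the name `ci_mean` is not part of the curriculum) that implements the plug\-in interval:
```
## Sketch: CLT-based confidence interval for a sample mean
ci_mean <- function(x, C = 0.95) {
  q <- qnorm( 1 - (1 - C) / 2 )
  se <- sd(x) / sqrt(length(x))
  tibble(
    mean = mean(x),
    lo = mean(x) - q * se,
    hi = mean(x) + q * se
  )
}
## Example: should reproduce lo_q1 and hi_q1 from q1 above
ci_mean(z_q1, C = 0.99)
```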
40\.5 Notes
-----------
\[1] Namely, the population must have finite mean and finite variance.
\[2] [This](https://seeing-theory.brown.edu/frequentist-inference/index.html) is **the best** visualization of the confidence interval concept that I have ever found. Click through Frequentist Inference \> Confidence Interval to see the animation.
\[3] Part of the issue here is that we are not accounting for the additional variability that arises from estimating the standard deviation. Using a [t\-distribution](https://en.wikipedia.org/wiki/Student%27s_t-distribution#Confidence_intervals) to construct more conservative confidence intervals helps at lower sample sizes; see the short comparison after these notes.
\[4] The process of making decisions about what to believe about reality based on data is called [hypothesis testing](https://en.wikipedia.org/wiki/Statistical_hypothesis_testing). We’ll talk about this soon!
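As a quick illustration of note \[3] (my own addition, not from the original), compare the normal and t critical values for a \\(95%\\) interval at a small sample size:
```
## Illustration only: normal vs. t critical values at n = 5 (df = 4)
qnorm( 1 - (1 - 0.95) / 2 ) # approximately 1.96
qt( 1 - (1 - 0.95) / 2, df = 4 ) # approximately 2.78; wider, more conservative intervals
```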
41 Data: Reading Excel Sheets
=============================
*Purpose*: The Tidyverse is built to work with tidy data. Unfortunately, most data in the wild are not tidy. The good news is that we have a lot of tools for *wrangling* data into tidy form. The bad news is that “every untidy dataset is untidy in its own way.” I can’t show you every crazy way people decide to store their data. But I can walk you through a worked example to show some common techniques.
In this case study, I’ll take you through the process of *wrangling* a messy Excel spreadsheet into machine\-readable form. You will learn some general tools for wrangling data, and you can keep this notebook as a *recipe* for future messy datasets of similar form.
*Reading*: (None)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(readxl) # For reading Excel sheets
library(httr) # For downloading files
## Use my tidy-exercises copy of UNDOC data for stability
url_undoc <- "https://github.com/zdelrosario/tidy-exercises/blob/master/2019/2019-12-10-news-plots/GSH2013_Homicide_count_and_rate.xlsx?raw=true"
filename <- "./data/undoc.xlsx"
```
I keep a copy of the example data in a personal repo; download a local copy.
```
## NOTE: No need to edit
curl::curl_download(
url_undoc,
filename
)
```
41\.1 Wrangling Basics
----------------------
### 41\.1\.1 **q1** Run the following code and pay attention to the column names. Open the downloaded Excel sheet and compare. Why are the column names so weird?
```
## NOTE: No need to edit; run and inspect
df_q1 <- read_excel(filename)
```
```
## New names:
## • `` -> `...2`
## • `` -> `...3`
## • `` -> `...4`
## • `` -> `...5`
## • `` -> `...6`
## • `` -> `...7`
## • `` -> `...8`
## • `` -> `...9`
## • `` -> `...10`
## • `` -> `...11`
## • `` -> `...12`
## • `` -> `...13`
## • `` -> `...14`
## • `` -> `...15`
## • `` -> `...16`
## • `` -> `...17`
## • `` -> `...18`
## • `` -> `...19`
```
```
df_q1 %>% glimpse
```
```
## Rows: 447
## Columns: 19
## $ `Intentional homicide count and rate per 100,000 population, by country/territory (2000-2012)` <chr> …
## $ ...2 <chr> …
## $ ...3 <chr> …
## $ ...4 <chr> …
## $ ...5 <chr> …
## $ ...6 <chr> …
## $ ...7 <chr> …
## $ ...8 <dbl> …
## $ ...9 <dbl> …
## $ ...10 <dbl> …
## $ ...11 <dbl> …
## $ ...12 <dbl> …
## $ ...13 <dbl> …
## $ ...14 <dbl> …
## $ ...15 <dbl> …
## $ ...16 <chr> …
## $ ...17 <chr> …
## $ ...18 <chr> …
## $ ...19 <chr> …
```
**Observations**:
* The top row is filled with expository text. The actual column names are several rows down.
Most `read_` functions have a *skip* argument you can use to skip over the first few lines. Use this argument in the next task to deal with the top of the Excel sheet.
### 41\.1\.2 **q2** Read the Excel sheet.
Open the target Excel sheet (located at `./data/undoc.xlsx`) and find the line (row) at which the year column headers are located. Use the `skip` keyword to start your read at that line.
```
## TODO:
df_q2 <- read_excel(
filename,
skip = 6
)
```
```
## New names:
## • `` -> `...1`
## • `` -> `...2`
## • `` -> `...3`
## • `` -> `...4`
## • `` -> `...5`
## • `` -> `...6`
```
```
df_q2 %>% glimpse
```
```
## Rows: 444
## Columns: 19
## $ ...1 <chr> "Africa", NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, N…
## $ ...2 <chr> "Eastern Africa", NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, N…
## $ ...3 <chr> "Burundi", NA, "Comoros", NA, "Djibouti", NA, "Eritrea", NA, "E…
## $ ...4 <chr> "PH", NA, "PH", NA, "PH", NA, "PH", NA, "PH", NA, "CJ", NA, "PH…
## $ ...5 <chr> "WHO", NA, "WHO", NA, "WHO", NA, "WHO", NA, "WHO", NA, "CTS", N…
## $ ...6 <chr> "Rate", "Count", "Rate", "Count", "Rate", "Count", "Rate", "Cou…
## $ `2000` <dbl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 6.2, 70…
## $ `2001` <dbl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 7.7, 90…
## $ `2002` <dbl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 4.8, 56…
## $ `2003` <dbl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 2.5, 30…
## $ `2004` <dbl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 4.0, 1395.0, NA, NA, 3.…
## $ `2005` <dbl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 3.5, 1260.0, NA, NA, 1.…
## $ `2006` <dbl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 3.5, 1286.0, NA, NA, 6.…
## $ `2007` <dbl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 3.4, 1281.0, NA, NA, 5.…
## $ `2008` <dbl> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, 3.6, 1413.0, NA, NA, 5.…
## $ `2009` <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, "5.6", "2218", NA, NA, …
## $ `2010` <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, "5.5", "2239", NA, NA, …
## $ `2011` <chr> NA, NA, NA, NA, NA, NA, NA, NA, NA, NA, "6.3", "2641", NA, NA, …
## $ `2012` <chr> "8", "790", "10", "72", "10.1", "87", "7.1", "437", "12", "1104…
```
Use the following test to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(setequal(
(df_q2 %>% names() %>% .[7:19]),
as.character(seq(2000, 2012))
))
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
Let’s take stock of where we are:
```
df_q2 %>% head()
```
```
## # A tibble: 6 × 19
## ...1 ...2 ...3 ...4 ...5 ...6 `2000` `2001` `2002` `2003` `2004` `2005`
## <chr> <chr> <chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Africa East… Buru… PH WHO Rate NA NA NA NA NA NA
## 2 <NA> <NA> <NA> <NA> <NA> Count NA NA NA NA NA NA
## 3 <NA> <NA> Como… PH WHO Rate NA NA NA NA NA NA
## 4 <NA> <NA> <NA> <NA> <NA> Count NA NA NA NA NA NA
## 5 <NA> <NA> Djib… PH WHO Rate NA NA NA NA NA NA
## 6 <NA> <NA> <NA> <NA> <NA> Count NA NA NA NA NA NA
## # … with 7 more variables: `2006` <dbl>, `2007` <dbl>, `2008` <dbl>,
## # `2009` <chr>, `2010` <chr>, `2011` <chr>, `2012` <chr>
```
We still have problems:
* The first few columns don’t have sensible names. The `col_names` argument allows us to set manual names at the read phase.
* Some of the columns are of the wrong type; for instance `2012` is a `chr` vector. We can use the `col_types` argument to set manual column types.
### 41\.1\.3 **q3** Change the column names and types.
Use the provided names in `col_names_undoc` with the `col_names` argument to set *manual* column names. Use the `col_types` argument to set all years to `"numeric"`, and the rest to `"text"`.
*Hint 1*: Since you’re providing manual `col_names`, you will have to *adjust* your `skip` value!
*Hint 2*: You can use a named vector for `col_types` to help keep track of which type is assigned to which variable, for instance `c("variable" = "type")`.
```
## NOTE: Use these column names
col_names_undoc <-
c(
"region",
"sub_region",
"territory",
"source",
"org",
"indicator",
"2000",
"2001",
"2002",
"2003",
"2004",
"2005",
"2006",
"2007",
"2008",
"2009",
"2010",
"2011",
"2012"
)
## TASK: Use the arguments `skip`, `col_names`, and `col_types`
df_q3 <- read_excel(
filename,
sheet = 1,
skip = 7,
col_names = col_names_undoc,
col_types = c(
"region" = "text",
"sub_region" = "text",
"territory" = "text",
"source" = "text",
"org" = "text",
"indicator" = "text",
"2000" = "numeric",
"2001" = "numeric",
"2002" = "numeric",
"2003" = "numeric",
"2004" = "numeric",
"2005" = "numeric",
"2006" = "numeric",
"2007" = "numeric",
"2008" = "numeric",
"2009" = "numeric",
"2010" = "numeric",
"2011" = "numeric",
"2012" = "numeric"
)
)
```
```
## Warning: Expecting numeric in P315 / R315C16: got '2366*'
```
```
## Warning: Expecting numeric in Q315 / R315C17: got '1923*'
```
```
## Warning: Expecting numeric in R315 / R315C18: got '1866*'
```
```
## Warning: Expecting numeric in S381 / R381C19: got 'x'
```
```
## Warning: Expecting numeric in S431 / R431C19: got 'x'
```
```
## Warning: Expecting numeric in S433 / R433C19: got 'x'
```
```
## Warning: Expecting numeric in S435 / R435C19: got 'x'
```
```
## Warning: Expecting numeric in S439 / R439C19: got 'x'
```
```
## Warning: Expecting numeric in S445 / R445C19: got 'x'
```
Use the following test to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(setequal(
(df_q3 %>% names()),
col_names_undoc
))
```
```
## [1] TRUE
```
```
assertthat::assert_that((df_q3 %>% slice(1) %>% pull(`2012`)) == 8)
```
```
## [1] TRUE
```
```
print("Great!")
```
```
## [1] "Great!"
```
41\.2 Danger Zone
-----------------
Now let’s take a look at the head of the data:
```
df_q3 %>% head()
```
```
## # A tibble: 6 × 19
## region sub_r…¹ terri…² source org indic…³ `2000` `2001` `2002` `2003` `2004`
## <chr> <chr> <chr> <chr> <chr> <chr> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Africa Easter… Burundi PH WHO Rate NA NA NA NA NA
## 2 <NA> <NA> <NA> <NA> <NA> Count NA NA NA NA NA
## 3 <NA> <NA> Comoros PH WHO Rate NA NA NA NA NA
## 4 <NA> <NA> <NA> <NA> <NA> Count NA NA NA NA NA
## 5 <NA> <NA> Djibou… PH WHO Rate NA NA NA NA NA
## 6 <NA> <NA> <NA> <NA> <NA> Count NA NA NA NA NA
## # … with 8 more variables: `2005` <dbl>, `2006` <dbl>, `2007` <dbl>,
## # `2008` <dbl>, `2009` <dbl>, `2010` <dbl>, `2011` <dbl>, `2012` <dbl>, and
## # abbreviated variable names ¹sub_region, ²territory, ³indicator
```
Irritatingly, many of the cell values are left *implicit*; as humans reading these data, we can tell that the entries in `region` under `Africa` also have the value `Africa`. However, the computer can’t tell this! We need to make these values *explicit* by filling them in.
To fix this, I’m going to *guide* you through some slightly advanced Tidyverse code to *lag\-fill* the missing values. To that end, I’ll define and demonstrate two helper functions:
First, the following function counts the number of rows with `NA` entries in a chosen set of columns:
```
## Helper function to count num rows w/ NA in vars_lagged
rowAny <- function(x) rowSums(x) > 0
countna <- function(df, vars_lagged) {
df %>%
filter(rowAny(across(vars_lagged, is.na))) %>%
dim %>%
.[[1]]
}
countna(df_q3, c("region"))
```
```
## Warning: Using an external vector in selections was deprecated in tidyselect 1.1.0.
## ℹ Please use `all_of()` or `any_of()` instead.
## # Was:
## data %>% select(vars_lagged)
##
## # Now:
## data %>% select(all_of(vars_lagged))
##
## See <https://tidyselect.r-lib.org/reference/faq-external-vector.html>.
```
```
## [1] 435
```
Ideally we want this count to be *zero*. To fill in values, we will use the following function to do one round of *lag\-filling*:
```
lagfill <- function(df, vars_lagged) {
df %>%
mutate(across(
vars_lagged,
function(var) {
if_else(
is.na(var) & !is.na(lag(var)),
lag(var),
var
)
}
))
}
df_tmp <-
df_q3 %>%
lagfill(c("region"))
countna(df_tmp, c("region"))
```
```
## [1] 429
```
We can see that `lagfill` has filled the `Africa` value in row 2, as well as a number of other rows as evidenced by the reduced value returned by `countna`.
What we’ll do is continually run `lagfill` until we reduce `countna` to zero. We could do this by repeatedly running the function *manually*, but that would be silly. Instead, we’ll run a `while` loop to automatically run the function until `countna` reaches zero.
### 41\.2\.1 **q4** I have already provided the `while` loop below; fill in `vars_lagged` with the names of the columns where cell values are *implicit*.
*Hint*: Think about which columns have implicit values, and which truly have missing values.
```
## Choose variables to lag-fill
vars_lagged <- c("region", "sub_region", "territory", "source", "org")
## NOTE: No need to edit this
## Trim head and notes
df_q4 <-
df_q3 %>%
slice(-(n()-5:-n()))
## Repeatedly lag-fill until NA's are gone
while (countna(df_q4, vars_lagged) > 0) {
df_q4 <-
df_q4 %>%
lagfill(vars_lagged)
}
```
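As an aside, `tidyr::fill()` implements this kind of down\-filling directly; the following sketch (not part of the exercise) should produce the same result as the loop above:
```
## Alternative sketch (illustration only): down-fill with tidyr::fill()
df_q4_alt <-
  df_q3 %>%
  slice(-(n()-5:-n())) %>%
  fill(all_of(vars_lagged), .direction = "down")
```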
And we’re done! All of the particularly tricky wrangling is complete. You could now use pivoting to tidy the data into long form, as sketched below.
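For instance, a minimal sketch of that last pivoting step (the name `df_tidy` is mine; the original exercise stops before this point) might look like:
```
## Sketch (illustration only): gather the year columns into long form
df_tidy <-
  df_q4 %>%
  pivot_longer(
    cols = `2000`:`2012`,
    names_to = "year",
    values_to = "value"
  ) %>%
  mutate(year = as.integer(year))
```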
42 Stats: Error and Bias
========================
*Purpose*: *Error* is a subtle concept. Often statistics concepts are introduced
with a host of assumptions on the errors. In this short exercise, we’ll remind
ourselves what errors are and learn what happens when one standard
assumption—*unbiasedness*—is violated.
*Prerequisites*: `c02-michelson`, `e-stat06-clt`
```
## Note: No need to edit this chunk!
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(googlesheets4)
url <- "https://docs.google.com/spreadsheets/d/1av_SXn4j0-4Rk0mQFik3LLr-uf0YdA06i3ugE6n-Zdo/edit?usp=sharing"
c_true <- 299792.458 # Exact speed of light in a vacuum (km / s)
c_michelson <- 299944.00 # Michelson's speed estimate (km / s)
meas_adjust <- +92 # Michelson's speed of light adjustment (km / s)
c_michelson_uncertainty <- 51 # Michelson's measurement uncertainty (km / s)
gs4_deauth()
ss <- gs4_get(url)
df_michelson <-
read_sheet(ss) %>%
select(Date, Distinctness, Temp, Velocity) %>%
mutate(
Distinctness = as_factor(Distinctness),
c_meas = Velocity + meas_adjust
)
```
```
## ✔ Reading from "michelson1879".
```
```
## ✔ Range 'Sheet1'.
```
42\.1 Errors
------------
Let’s re\-examine the Michelson speed of light data to discuss the concept of *error*. Let \\(c\\) denote the true speed of light, and let \\(\\hat{c}\_i\\) denote the i\-th measurement by Michelson. Then the error \\(\\epsilon\_{c,i}\\) is:
\\\[\\epsilon\_{c,i} \\equiv \\hat{c}\_i \- c.\\]
Note that these are *errors* (and not some other quantity) because they are
differences against the true value \\(c\\). Very frequently in statistics, we
*assume* that the errors are *unbiased*; that is, we assume \\(\\mathbb{E}\[\\epsilon] \= 0\\). Let’s take a look at what happens when that assumption is violated.
### 42\.1\.1 **q1** Compute the errors \\(\\epsilon\_c\\) using Michelson’s measurements `c_meas` and the true speed of light `c_true`.
```
## TASK: Compute `epsilon_c`
df_q1 <-
df_michelson %>%
mutate(epsilon_c = c_meas - c_true)
df_q1 %>%
ggplot(aes(epsilon_c)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
We can use descriptive statistics in order to summarize the errors. This will give us a quantification of the *uncertainty* in our measurements: remember that uncertainty is our assessment of the error.
### 42\.1\.2 **q2** Estimate the mean and standard deviation of \\(\\epsilon\_c\\) from `df_q1.` Is the error mean large or small, compared to its standard deviation? How about compared to Michelson’s uncertainty `c_michelson_uncertainty`?
```
## TASK: Estimate `epsilon_mean` and `epsilon_sd` from df_q1
df_q2 <-
df_q1 %>%
summarize(
epsilon_mean = mean(epsilon_c),
epsilon_sd = sd(epsilon_c)
)
```
**Observations**:
* The error mean (about \\(152\\) km/s) is large compared to the standard deviation of the errors (about \\(79\\) km/s): roughly twice as large.
* It is also about three times Michelson’s reported uncertainty `c_michelson_uncertainty` (\\(51\\) km/s), so the errors are far from zero\-mean.
Use the following tests to check your answers.
```
## NOTE: No need to change this!
assertthat::assert_that(abs((df_q2 %>% pull(epsilon_mean)) - 151.942) < 1e-3)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs((df_q2 %>% pull(epsilon_sd)) - 79.01055) < 1e-3)
```
```
## [1] TRUE
```
```
print("Great job!")
```
```
## [1] "Great job!"
```
Generally, we want our errors to have *zero mean*—the case where the errors have zero mean is called *unbiased*. The quantity \\(\\mathbb{E}\[\\epsilon]\\) is called *bias*, and an estimate such as \\(\\hat{c}\\) with \\(\\mathbb{E}\[\\epsilon] \\neq 0\\) is called *biased*.
What can happen when our estimates are biased? In that case, increased data *may not* improve our estimate, and our statistical tools—such as confidence intervals—may give us a false impression of the true error. The next example will show us what happens if we apply confidence intervals in a biased\-data setting like Michelson’s data.
### 42\.1\.3 **q3** Use a CLT approximation to construct a \\(99%\\) confidence interval on the mean of `c_meas`. Check (with the provided code) if your CI includes the true speed of light.
*Hint*: This computation should **not** use the true speed of light \\(c\_true\\) in any way.
```
## TASK: Compute a 99% confidence interval on the mean of c_meas
C <- 0.99
q <- qnorm( 1 - (1 - C) / 2 )
df_q3 <-
df_q1 %>%
summarize(
c_meas_mean = mean(c_meas),
c_meas_sd = sd(c_meas),
n_samp = n(),
c_lo = c_meas_mean - q * c_meas_sd / sqrt(n_samp),
c_hi = c_meas_mean + q * c_meas_sd / sqrt(n_samp)
)
## NOTE: This checks if the CI contains c_true
(df_q3 %>% pull(c_lo) <= c_true) & (c_true <= df_q3 %>% pull(c_hi))
```
```
## [1] FALSE
```
Use the following tests to check your answers.
```
## NOTE: No need to change this!
assertthat::assert_that(abs((df_q3 %>% pull(c_lo)) - 299924.048) < 1e-3)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs((df_q3 %>% pull(c_hi)) - 299964.752) < 1e-3)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
Once you correctly compute a CI for `c_meas`, you should find that the interval *does not* include `c_true`. A CI is never guaranteed to include its true value—it is a probabilistic construction, after all. However, we saw above that the errors are *biased*; even if we were to gather more data, our confidence intervals would converge on the *wrong* value. Statistics are not a cure\-all!
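To drive this point home, here is a small simulation of my own (the bias and spread values are made up for illustration, roughly matching the magnitudes above): as the sample size grows, the interval narrows, but it concentrates around the biased value rather than `c_true`.
```
## Sketch (illustration only): a biased measurement process; more data
## narrows the CI but does not remove the bias.
set.seed(101)
bias <- 150 # km / s; made-up value for illustration
map_dfr(
  c(10, 100, 1000, 10000),
  function(n) {
    x <- c_true + bias + rnorm(n, sd = 80)
    tibble(
      n = n,
      c_lo = mean(x) - 1.96 * sd(x) / sqrt(n),
      c_hi = mean(x) + 1.96 * sd(x) / sqrt(n),
      contains_true = (c_lo <= c_true) & (c_true <= c_hi)
    )
  }
)
```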
43 Data: Map Basics
===================
*Purpose*: The `map()` function and its variants are extremely useful for automating iterative tasks. We’ll learn the basics through this short exercise.
*Reading*: [Introduction to Iteration](https://rstudio.cloud/learn/primers/5.1) and [Map](https://rstudio.cloud/learn/primers/5.2) (you can skip the Case Study).
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
43\.1 Formulas
--------------
The primer introduced `map()` as a way to apply a function to a list.
```
# NOTE: No need to change this example
map_dbl(c(1, 2, 3), log)
```
```
## [1] 0.0000000 0.6931472 1.0986123
```
This is very helpful when we have a built\-in or previously defined function, but what about when we need a more special\-purpose function for a specific case? In this instance we can use R’s *formula notation*. For example, to compute powers of `10`, we could do:
```
# NOTE: No need to change this example
map_dbl(c(1, 2, 3), ~ 10 ^ .x)
```
```
## [1] 10 100 1000
```
The tilde `~` operator signals to R that we’re doing something special: defining a formula. The `.x` symbol is the argument for this new function. Basically, we are taking a formal function definition, such as
```
# NOTE: No need to change this example
pow10 <- function(x) {10 ^ x}
```
And defining a more compact version with `~ 10 ^ .x`. We’ve actually already seen this formula notation when we use `facet_grid()` and `facet_wrap()`, though it’s used in a very different way in that context.
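As a quick check (a small sketch, not part of the exercise), the named function `pow10` defined above and the formula give identical results:

```
## Both forms apply the same computation to each element
map_dbl(c(1, 2, 3), pow10) # named function defined above
map_dbl(c(1, 2, 3), ~ 10 ^ .x) # formula notation
```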
### 43\.1\.1 **q1** Use `map_chr()` to prepend the string `"N: "` to the numbers in `v_nums`. Use formula notation with `str_c()` as your map function.
*Hint*: The function `str_c()` combines two or more objects into one string.
```
v_nums <- c(1, 2, 3)
v_q1 <- map_chr(v_nums, ~ str_c("N: ", .x))
v_q1
```
```
## [1] "N: 1" "N: 2" "N: 3"
```
Use the following test to check your work.
```
## NOTE: No need to change this!
assertthat::assert_that(setequal(v_q1, c("N: 1", "N: 2", "N: 3")))
```
```
## [1] TRUE
```
```
print("Great job!")
```
```
## [1] "Great job!"
```
Formula notation is another way to pass arguments to functions; I find this a little more readable than passing arguments to `map()`.
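For reference, here is a minimal sketch of the alternative mentioned above: extra arguments placed after the function name are passed along by `map_dbl()`, which is equivalent to the formula version.

```
## Two equivalent ways to take the base-2 log of each element
map_dbl(c(1, 2, 3), log, base = 2) # arguments passed through map_dbl()
map_dbl(c(1, 2, 3), ~ log(.x, base = 2)) # formula notation
```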
### 43\.1\.2 **q2** Use `map_dbl()` to compute the `log` with `base = 2` of the numbers in `v_nums`. Use formula notation with `log()` as your map function.
```
v_q2 <- map_dbl(v_nums, ~ log(.x, base = 2))
v_q2
```
```
## [1] 0.000000 1.000000 1.584963
```
```
## NOTE: No need to change this!
assertthat::assert_that(setequal(v_q2, log(v_nums, base = 2)))
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
44 Data: Factors
================
*Purpose*: Factors are an important type of variable. Since they’re largely in
a class of their own, there are special tools available in the package `forcats`
to help wrangle factors.
*Reading*: (None)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(gapminder)
```
A *factor* is a variable that only takes fixed, often non\-numeric, values.
Factors are sometimes called *categorical variables*. We’ve already seen a number of factors in earlier exercises; two examples appear below: `cut` in the `diamonds` dataset and `continent` in `gapminder`.
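As a minimal sketch (not part of the exercises below), a factor stores its values together with a fixed set of allowed *levels*:

```
## A small example factor: the levels are fixed, even if a level is unobserved
f <- factor(c("low", "high", "low"), levels = c("low", "medium", "high"))
f
levels(f)
```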
44\.1 Organization
------------------
### 44\.1\.1 **q1** The following chunk displays the levels of the factor `cut` from the `diamonds` dataset. Run
the following code chunk and note in what *order* they are listed.
```
## NOTE: No need to edit this
diamonds %>%
pull(cut) %>%
levels()
```
```
## [1] "Fair" "Good" "Very Good" "Premium" "Ideal"
```
```
## TASK: Determine what order the factors are listed in.
```
**Observations**:
* The factor levels are ordered in terms of increasing quality of diamond cut.
* The levels are essentially a measure of quality; we would expect price to (generally) increase with improved cut.
### 44\.1\.2 **q2** Determine the levels for the `continent` variable in the `gapminder` dataset. Note the *order* of the levels.
```
## TASK: Determine the levels of the variable
gapminder %>%
pull(continent) %>%
levels()
```
```
## [1] "Africa" "Americas" "Asia" "Europe" "Oceania"
```
```
## TASK: Determine what order the factors are listed in.
```
**Observations**:
* The factor levels are ordered alphabetically.
The [forcats](https://forcats.tidyverse.org/) package has tools for working with
factors. For instance, we can assign manual factor levels with the function
`fct_relevel()`. This is generally used in a `mutate()`; for instance `mutate(x = fct_relevel(x, "a", "b", "c"))`.
### 44\.1\.3 **q3** Relevel the continents.
Copy your code from q2 and introduce a mutate using `fct_relevel()` to reorder `continent`. Choose which levels to reorder and what order in which to put them. Note how the resulting order is changed when you call `levels()` at the end of the pipe.
```
gapminder %>%
mutate(
continent = fct_relevel(
continent,
"Oceania"
)
) %>%
pull(continent) %>%
levels()
```
```
## [1] "Oceania" "Africa" "Americas" "Asia" "Europe"
```
**Observations**:
* Calling `fct_relevel()` as I do in the solution brings “Oceania” to the front, but leaves the other levels in their original order.
44\.2 Visual Tricks
-------------------
When factors do not have any *meaningful* order, it is generally better to
sort them on another variable, rather than relying on their default (often alphabetical) order.
```
mpg %>%
mutate(manufacturer = fct_reorder(manufacturer, cty)) %>%
ggplot(aes(manufacturer, cty)) +
geom_boxplot() +
coord_flip()
```
The function `fct_reorder(f, x)` allows you to reorder the factor `f` based on
another variable `x`. This will “match” the order between the two.
### 44\.2\.1 **q4** Use `fct_reorder()` to sort `manufacturer` to match the order of `cty`.
```
## TASK: Modify the following code to sort the factor `manufacturer` based on
## `cty`.
mpg %>%
mutate(manufacturer = fct_reorder(manufacturer, cty)) %>%
ggplot(aes(manufacturer, cty)) +
geom_boxplot() +
coord_flip()
```
**Observations**:
*Before*
\- Toyota and Nissan seem to have the most variable vehicles in this dataset, in terms of `cty`.
\- Volkswagen has a number of high `cty` outliers.
*Sorted*
\- Honda has the most efficient vehicles in this sample.
\- Lincoln and Land Rover have the least efficient vehicles in this sample.
\- Mercury has a remarkably consistent set of `cty` values; perhaps this is a small sample.
The function `fct_reorder2(f, x, y)` allows us to sort on *two* variables; this
is most useful when making line plots.
### 44\.2\.2 **q5** Sort the countries by values.
Use `fct_reorder2()` to sort `country` to match the order of `x = year, y = pop`. Pay attention to the rightmost edge of the curves and the legend order. How does `fct_reorder2()` sort factors?
```
## TASK: Modify the following code to sort the factor `country` based on `year`
## and `pop`.
gapminder %>%
filter(dense_rank(country) <= 7) %>%
mutate(country = fct_reorder2(country, year, pop)) %>%
ggplot(aes(year, pop, color = country)) +
geom_line() +
scale_y_log10()
```
**Observations**:
* The factors are sorted such that the rightmost points on the lines are vertically ordered the same as the legend.
This *small, simple trick* is extremely helpful for creating easily\-readable
line graphs.
45 Stats: Fitting Distributions
===============================
*Purpose*: We use distributions to model random quantities. However, in order to model physical phenomena, we should *fit* the distributions using data. In this short exercise you’ll learn some functions for fitting distributions to data.
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(MASS)
```
```
##
## Attaching package: 'MASS'
```
```
## The following object is masked from 'package:dplyr':
##
## select
```
```
library(broom)
```
45\.1 Aside: Masking
--------------------
Note that when we load the `MASS` and `tidyverse` packages, we will find that their functions *conflict*. To deal with this, we’ll need to learn how to specify a *namespace* when calling a function. To do this, use the `::` notation; i.e. `namespace::function`. For instance, to call `filter` from `dplyr`, we would write `dplyr::filter()`.
One of the specific conflicts between `MASS` and `tidyverse` is the `select` function. Try running the chunk below; it will throw an error:
```
diamonds %>%
select(carat, cut) %>%
glimpse()
```
This error occurs because `MASS` *also* provides a `select` function.
### 45\.1\.1 **q0** Fix the following code!
Use the namespace `::` operator to use the correct `select()` function.
```
diamonds %>%
dplyr::select(carat, cut) %>%
glimpse()
```
```
## Rows: 53,940
## Columns: 2
## $ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0.30…
## $ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Very …
```
45\.2 Distribution Parameters and Fitting
-----------------------------------------
The function `rnorm()` requires values for `mean` and `sd`; while `rnorm()` has
defaults for these arguments, if we are trying to model a random event in the
real world, we should set `mean, sd` based on data. The process of estimating
parameters such as `mean, sd` for a distribution is called *fitting*. Fitting a
distribution is often accomplished through [*maximum likelihood
estimation*](https://en.wikipedia.org/wiki/Maximum_likelihood_estimation) (MLE);
rather than discuss the gory details of MLE, we will simply use MLE as a
technology to do useful work.
First, let’s look at an example of MLE carried out with the function `MASS::fitdistr()`.
```
## NOTE: No need to edit this setup
set.seed(101)
df_data_norm <- tibble(x = rnorm(50, mean = 2, sd = 1))
## NOTE: Example use of fitdistr()
df_est_norm <-
df_data_norm %>%
pull(x) %>%
fitdistr(densfun = "normal") %>%
tidy()
df_est_norm
```
```
## # A tibble: 2 × 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 mean 1.88 0.131
## 2 sd 0.923 0.0923
```
*Notes*:
* `fitdistr()` takes a *vector*; I use the function `pull(x)` to pull the vector `x` out of the dataframe.
* `fitdistr()` returns a messy output; the function `broom::tidy()` automagically cleans up the output and provides a tibble (see the short sketch below).
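Since the tidied result is an ordinary tibble, we can pull out a single parameter estimate with familiar tools; here is a short sketch (not part of the exercise):

```
## Sketch: extract the fitted mean from the tidy output
df_est_norm %>%
  filter(term == "mean") %>%
  pull(estimate)
```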
### 45\.2\.1 **q1** Compute the sample mean and standard deviation of `x` in `df_data_norm`. Compare these values to those you computed with `fitdistr()`.
```
## TASK: Compute the sample mean and sd of `df_data_norm %>% pull(x)`
mean_est <- df_data_norm %>% pull(x) %>% mean()
sd_est <- df_data_norm %>% pull(x) %>% sd()
mean_est
```
```
## [1] 1.876029
```
```
sd_est
```
```
## [1] 0.9321467
```
**Observations**:
* The values are exactly the same!
Estimating parameters for a normal distribution is easy because it is parameterized in terms of the mean and standard deviation. The advantage of using `fitdistr()` is that it will allow us to work with a much wider selection of distribution models.
### 45\.2\.2 **q2** Use the function `fitdistr()` to fit a `"weibull"` distribution to the realizations `y` in `df_data_weibull`.
*Note*: The [Weibull distribution](https://en.wikipedia.org/wiki/Weibull_distribution) is used to model many physical phenomena, including the strength of composite materials.
```
## NOTE: No need to edit this setup
set.seed(101)
df_data_weibull <- tibble(y = rweibull(50, shape = 2, scale = 4))
## TASK: Use the `fitdistr()` function to estimate parameters
df_q2 <-
df_data_weibull %>%
pull(y) %>%
fitdistr(densfun = "weibull") %>%
tidy()
df_q2
```
```
## # A tibble: 2 × 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 shape 2.18 0.239
## 2 scale 4.00 0.274
```
Once we’ve fit a distribution, we can use the estimated parameters to approximate quantities like probabilities. If we were using the distribution for `y` to model a material strength, we would estimate probabilities to compute the rate of failure for mechanical components—we could then use this information to make design decisions.
### 45\.2\.3 **q3** Extract the estimates `shape_est` and `scale_est` from `df_q2`, and use them to estimate the probability that `Y <= 2`.
*Hint*: `pr_true` contains the true probability; modify that code to compute the estimated probability.
```
## NOTE: No need to modify this line
pr_true <- pweibull(q = 2, shape = 2, scale = 4)
set.seed(101)
shape_est <-
df_q2 %>%
filter(term == "shape") %>%
pull(estimate)
scale_est <-
df_q2 %>%
filter(term == "scale") %>%
pull(estimate)
pr_est <- pweibull(q = 2, shape = shape_est, scale = scale_est)
pr_true
```
```
## [1] 0.2211992
```
```
pr_est
```
```
## [1] 0.1988446
```
You’ll probably find that `pr_true != pr_est`! As we saw in `e-stat06-clt`, we should really compute a *confidence interval* to assess our degree of confidence in this probability estimate. However, it’s not obvious how we can use the ideas of the Central Limit Theorem to put a confidence interval around `pr_est`. In the next exercise we’ll learn a very general technique for estimating confidence intervals.
45\.3 Notes
-----------
\[1] For another tutorial on fitting distributions in R, see this [R\-bloggers](https://www.r-bloggers.com/fitting-distributions-with-r/) post.
46 Vis: Perceptual Basics
=========================
*Purpose*: Creating a *presentation\-quality* graph is an exercise in *communication*. In order to create graphs that other people can understand, we should know some stuff about *how humans see data*. Through the required “reading” (video) you’ll learn about visual perception, then put these ideas to use criticizing some graphs. Later, you’ll use these ideas to *improve* some graphs.
*Reading*: [How Humans See Data](https://www.youtube.com/watch?v=fSgEeI2Xpdc&list=PLluqivwOH1ouKkbM0c6x-g7DQnXF0UmC0&index=37&t=0s) (Video)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
46\.1 Criticize these graphs!
-----------------------------
Using the ideas from the reading (video), state some issues with the following graphs. As a reminder, the *visual hierarchy* is:
1. Position along a common scale
2. Position on identical but nonaligned scales
3. Length
4. Angle; Slope (With slope not too close to 0, \\(\\pi/2\\), or \\(\\pi\\).)
5. Area
6. Volume; Density; Color saturation
7. Color hue
### 46\.1\.1 **q1** What are some issues with the following graph? *Don’t just say* “it’s bad”—use concepts from the required reading.
```
## NOTE: No need to edit; run and criticize
mpg %>%
ggplot(aes(manufacturer, cty)) +
geom_boxplot() +
coord_flip()
```
**Observations**:
* An alphabetical ordering of factors is almost never meaningful.
We’ll learn how to reorder factors in `e-data11-factors`:
```
mpg %>%
ggplot(aes(fct_reorder(manufacturer, cty), cty)) +
geom_boxplot() +
coord_flip()
```
### 46\.1\.2 **q2** What are some issues with the following graph? *Don’t just say* “it’s bad”—use concepts from the required reading.
```
## NOTE: No need to edit; run and criticize
as_tibble(mtcars) %>%
mutate(model = rownames(mtcars)) %>%
ggplot(aes(x = "", y = "", size = mpg)) +
geom_point() +
facet_wrap(~model)
```
* Area is *low* on the visual hierarchy; it is difficult to see the difference between mpg values.
### 46\.1\.3 **q3** What are some issues with the following graph? *Don’t just say* “it’s bad”—use concepts from the required reading.
```
## NOTE: No need to edit; run and criticize
diamonds %>%
ggplot(aes(clarity, fill = cut)) +
geom_bar()
```
* Stacked bar charts force us to make comparisons using length, rather than position along a common axis; one possible fix is sketched below.
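One possible improvement (a sketch, not necessarily the exercise’s intended fix) is to dodge the bars so that every count is read as position from a common baseline:

```
## Sketch: dodged bars put all counts on a common scale
diamonds %>%
  ggplot(aes(clarity, fill = cut)) +
  geom_bar(position = "dodge")
```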
### 46\.1\.4 **q4** What are some issues with the following graph? *Don’t just say* “it’s bad”—use concepts from the required reading.
```
## NOTE: No need to edit; run and criticize
diamonds %>%
ggplot(aes(x = "", fill = cut)) +
geom_bar() +
coord_polar("y") +
labs(x = "")
```
* A pie chart encodes numbers as angles, which is low on the visual hierarchy; a simple alternative is sketched below.
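A plain bar chart (a sketch of one alternative) shows the same counts as position along a common scale:

```
## Sketch: the same counts, encoded as position rather than angle
diamonds %>%
  ggplot(aes(cut)) +
  geom_bar()
```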
47 Stats: The Bootstrap, Some Recipies
======================================
*Purpose*: Confidence intervals are an important tool for assessing our estimates. However, our tools so far for estimating confidence intervals rely on assumptions (normality, applicability of the CLT) that limit the statistics we can study. In this exercise we’ll learn about a general\-purpose tool we can use to approximate CI—the *bootstrap*.
```
library(MASS)
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
## ✖ dplyr::select() masks MASS::select()
```
```
library(broom)
library(rsample)
```
47\.1 A Simple Example: Estimating the Mean
-------------------------------------------
First, imagine that we have a sample from some population.
```
## NOTE: No need to edit this setup
set.seed(101)
df_data_norm <- tibble(x = rnorm(50))
df_data_norm %>%
ggplot(aes(x)) +
geom_histogram()
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
The set of samples—so long as it is representative of the population—is our *best available approximation* of the population. What the bootstrap does is operationalize this observation: We treat our sample as a population, and sample from it randomly. What that means is we generate some number of new *bootstrap samples* from our available sample. Visually, that looks like the following:
```
## NOTE: No need to edit this setup
df_resample_norm <-
bootstraps(df_data_norm, times = 1000) %>%
mutate(df = map(splits, ~ analysis(.x)))
df_resample_norm %>%
slice(1:9) %>%
unnest(df) %>%
ggplot(aes(x)) +
geom_histogram() +
facet_wrap(~ id)
```
```
## `stat_bin()` using `bins = 30`. Pick better value with `binwidth`.
```
Every panel in this figure depicts a single *bootstrap resample*, drawn from our original sample. Each bootstrap resample plays the role of a single sample; we construct a resample, compute a single statistic for each bootstrap resample, and we do this whole process some number of `times`. In the example above, I set `times = 1000`; generally larger is better, but a good rule of thumb is to do `1000` resamples.
*Notes*:
* The `bootstraps()` function comes from the `rsample` package, which implements many different resampling strategies (beyond the bootstrap).
* The `analysis()` function also comes from `rsample`; this is a special function we need to call when working with a resampling of the data \[1].
* We saw the `map()` function in `e-data10-map`; using `map()` above is necessary in part because we need to call `analysis()`. Since `analysis()` is not vectorized, we need the map to use this function on every split in `splits`.
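For intuition, a single bootstrap resample is nothing more than a draw *with replacement* from the original sample; here is a minimal base\-R sketch (not part of the exercise):

```
## Sketch: one bootstrap resample "by hand"
x_orig <- df_data_norm %>% pull(x) # the original sample
x_boot <- sample(x_orig, size = length(x_orig), replace = TRUE) # one resample
mean(x_boot) # the statistic computed on this one resample
```

The chunk below does this for every one of the `1000` resamples we stored above, computing the mean of each: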
```
## NOTE: No need to edit this example
v_mean_est <-
map_dbl(
df_resample_norm %>% pull(df),
~ summarize(.x, mean_est = mean(x)) %>% pull(mean_est)
)
v_mean_est[1:9]
```
```
## [1] -0.066678997 -0.244149601 0.049938113 -0.074652483 0.008309007
## [6] -0.196989467 -0.252095066 -0.238739640 -0.189294459
```
### 47\.1\.1 **q1** Modify the code above to use within a `mutate()` call on `df_resample_norm`. Assign the mean estimates to the new column `mean_est`.
```
df_q1 <-
df_resample_norm %>%
mutate(
mean_est = map_dbl(
df,
~ summarize(.x, mean_est = mean(x)) %>% pull(mean_est)
)
)
df_q1
```
```
## # Bootstrap sampling
## # A tibble: 1,000 × 4
## splits id df mean_est
## <list> <chr> <list> <dbl>
## 1 <split [50/16]> Bootstrap0001 <tibble [50 × 1]> -0.0667
## 2 <split [50/21]> Bootstrap0002 <tibble [50 × 1]> -0.244
## 3 <split [50/20]> Bootstrap0003 <tibble [50 × 1]> 0.0499
## 4 <split [50/15]> Bootstrap0004 <tibble [50 × 1]> -0.0747
## 5 <split [50/21]> Bootstrap0005 <tibble [50 × 1]> 0.00831
## 6 <split [50/16]> Bootstrap0006 <tibble [50 × 1]> -0.197
## 7 <split [50/19]> Bootstrap0007 <tibble [50 × 1]> -0.252
## 8 <split [50/17]> Bootstrap0008 <tibble [50 × 1]> -0.239
## 9 <split [50/17]> Bootstrap0009 <tibble [50 × 1]> -0.189
## 10 <split [50/18]> Bootstrap0010 <tibble [50 × 1]> -0.136
## # … with 990 more rows
```
The following test will verify that your `df_q1` is correct:
```
## NOTE: No need to change this!
assertthat::assert_that(
assertthat::are_equal(
df_q1 %>% pull(mean_est),
v_mean_est
)
)
```
```
## [1] TRUE
```
```
print("Great job!")
```
```
## [1] "Great job!"
```
What we have now in `df_q1 %>% pull(mean_est)` is an approximation of the *sampling distribution* for the mean estimate. Remember that a confidence interval is a construction based on the sampling distribution, so this is the object we need! From this point, our job would be to work the mathematical manipulations necessary to construct a confidence interval from the quantiles of `df_q1 %>% pull(mean_est)`. Thankfully, the `rsample` package has already worked out those details for us!
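For intuition, here is roughly what that manual construction could look like: take percentiles of the bootstrapped means directly (a rough sketch; `int_pctl()` below handles the details, and multiple statistics, for us):

```
## Sketch: a 95% percentile interval straight from the bootstrap means
df_q1 %>%
  summarize(
    ci_lo = quantile(mean_est, 0.025),
    ci_hi = quantile(mean_est, 0.975)
  )
```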
The `rsample` function `int_pctl()` will compute (percentile) confidence intervals from a bootstrap resampling, but we need to compute our own statistics. Remember the `fitdistr()` function from the previous exercise?
```
# NOTE: No need to change this demo code
df_data_norm %>%
pull(x) %>%
fitdistr(densfun = "normal") %>%
tidy()
```
```
## # A tibble: 2 × 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 mean -0.124 0.131
## 2 sd 0.923 0.0923
```
The output of `fitdistr()`, after being run through `tidy()`, is exactly what `int_pctl()` expects. Note that the output here is a tibble with a `term` column and two statistics: the `estimate` and the `std.error`. To use `int_pctl()`, we’ll have to provide statistics in this compatible form.
### 47\.1\.2 **q2** Modify the code below following `recall-fitdistr` to provide tidy results to `int_pctl()`.
*Hint*: You should only have to modify the formula (`~`) line.
```
df_q2 <-
df_resample_norm %>%
mutate(
estimates = map(
splits,
~ analysis(.x) %>% pull(x) %>% fitdistr(densfun = "normal") %>% tidy()
)
)
# NOTE: The following function call will work once you correctly edit the code above
int_pctl(df_q2, estimates)
```
```
## # A tibble: 2 × 6
## term .lower .estimate .upper .alpha .method
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 mean -0.378 -0.118 0.124 0.05 percentile
## 2 sd 0.750 0.911 1.07 0.05 percentile
```
Once you learn how to provide statistics in the form that `int_pctl()` is expecting, you’re off to the races! You can use the bootstrap to compute confidence intervals for very general settings.
One of the important things to remember is that the bootstrap is an *approximation*. The bootstrap relies on a number of assumptions; there are many, but two important ones are:
1. The data are representative of the population
2. Resampling is performed sufficiently many times
The next two tasks will study what happens when these two assumptions are not met.
### 47\.1\.3 **q3** (Representative sample) Read the following code before running it, and make a hypothesis about the result. Is the sample entering `bootstraps()` representative of the population `rnorm(mean = 0, sd = 1)`? How are the bootstrap results affected?
```
## TASK: Read this code; will the data be representative of the population
## rnorm(mean = 0, sd = 1)?
tibble(x = rnorm(n = 100)) %>%
filter(x < 0) %>%
bootstraps(times = 1000) %>%
mutate(
estimates = map(
splits,
~ analysis(.x) %>% pull(x) %>% fitdistr(densfun = "normal") %>% tidy()
)
) %>%
int_pctl(estimates)
```
```
## # A tibble: 2 × 6
## term .lower .estimate .upper .alpha .method
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 mean -0.965 -0.770 -0.605 0.05 percentile
## 2 sd 0.425 0.572 0.718 0.05 percentile
```
**Observations**:
* The sample is not at all representative; we are totally missing all positive samples.
* Correspondingly, the mean is much lower than it should be, and the standard deviation is too small.
The following code generates `100` different samples from a normal distribution (each with `n = 10`), and computes a very coarse bootstrap for each one.
### 47\.1\.4 **q4** (Number of replicates) First run this code, and comment on whether the approximate coverage probability is close to the nominal `0.95`. Increase the value of `times` and re\-run; at what point does the coverage probability approach the desired `0.95`?
*Note*: At higher values of `times`, the following code can take a long while to run. I recommend keeping `times <= 1000`.
```
## TASK: Run this code,
set.seed(101)
times <- 100 # Number of bootstrap resamples
df_q4 <-
map_dfr(
seq(1, 100), # Number of replicates
function(repl) {
tibble(x = rnorm(10)) %>%
bootstraps(times = times) %>%
mutate(
estimates = map(
splits,
~ analysis(.x) %>% pull(x) %>% fitdistr(densfun = "normal") %>% tidy()
)
) %>%
int_pctl(estimates, alpha = 1 - 0.95) %>%
mutate(repl = repl)
}
)
```
```
## Warning: Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## (this warning repeats for each of the 100 replicates; repeated output omitted)
```
```
## Warning in bootstraps(., times = times): Some assessment sets contained zero
## rows.
## (this warning also repeats; repeated output omitted)
```
```
## Estimate the coverage probability of the bootstrap intervals
df_q4 %>%
filter(term == "mean") %>%
mutate(cover = (.lower <= 0) & (0 <= .upper)) %>%
summarize(mean(cover))
```
```
## # A tibble: 1 × 1
## `mean(cover)`
## <dbl>
## 1 0.9
```
**Observations**:
* I find a coverage probability around `0.73` at `times = 10`. This is much smaller than desired.
* At `times = 1000` I find an estimated coverage probability around `0.88`, which is closer but still not perfect.
*Aside*: The `rsample` function `int_pctl()` actually complains when you give it fewer than `1000` bootstrap resamples. Since you’ll usually be running the bootstrap only a handful of times (rather than the `100` replicates above), you need not be stingy with resamples: use at least `1000` in most cases.
47\.2 A Worked Example: Probability Estimate
--------------------------------------------
To finish, I’ll present some example code showing how you can apply the bootstrap to a more complicated problem. In the previous exercise `e-stat08-fit-dist` we estimated a probability based on a fitted distribution. Now we have the tools to produce a bootstrap\-approximated confidence interval for that probability estimate.
Remember that we had the following setup: sampling from a Weibull distribution and estimating its parameters with `fitdistr()`.
```
## NOTE: No need to change this example code
set.seed(101)
df_data_w <- tibble(y = rweibull(50, shape = 2, scale = 4))
pr_true <- pweibull(q = 2, shape = 2, scale = 4)
df_data_w %>%
pull(y) %>%
fitdistr(densfun = "weibull") %>%
tidy()
```
```
## # A tibble: 2 × 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 shape 2.18 0.239
## 2 scale 4.00 0.274
```
In order to approximate a confidence interval for our probability estimate, we’ll need to provide the probability value in the form that `int_pctl()` expects. Below I define a helper function that takes each split, extracts the estimated parameters, and uses them to compute a probability estimate. I then add that value as a new row to the output of `tidy()`, making sure to populate the columns `estimate` and `term`.
```
## NOTE: No need to change this example code; but feel free to adapt it!
fit_fun <- function(split) {
## Fit distribution
df_tmp <-
analysis(split) %>%
pull(y) %>%
fitdistr(densfun = "weibull") %>%
tidy()
## Extract statistics
scale_est <-
df_tmp %>%
filter(term == "scale") %>%
pull(estimate)
shape_est <-
df_tmp %>%
filter(term == "shape") %>%
pull(estimate)
## Add probability estimate in tidy form
df_tmp %>%
bind_rows(tibble(
estimate = pweibull(q = 2, scale = scale_est, shape = shape_est),
term = "pr"
))
}
df_resample_pr <-
bootstraps(df_data_w, times = 1000) %>%
mutate(estimates = map(splits, fit_fun))
```
```
## Warning in densfun(x, parm[1], parm[2], ...): NaNs produced
## (this warning repeats many times across the 1000 bootstrap fits; repeated output omitted)
```
Now `df_resample_pr` contains everything I need to pass to `int_pctl()`:
```
## NOTE: No need to change this example code
int_pctl(df_resample_pr, estimates)
```
```
## # A tibble: 3 × 6
## term .lower .estimate .upper .alpha .method
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 pr 0.121 0.197 0.278 0.05 percentile
## 2 scale 3.50 3.99 4.51 0.05 percentile
## 3 shape 1.87 2.23 2.68 0.05 percentile
```
```
pr_true
```
```
## [1] 0.2211992
```
When I run this, I find that the confidence interval for `pr` contains `pr_true`, as one might hope!
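A quick programmatic version of that check (assuming the `df_resample_pr` and `pr_true` objects from above):
```
## Does the percentile interval for `pr` cover the true probability?
int_pctl(df_resample_pr, estimates) %>%
  filter(term == "pr") %>%
  mutate(covers_true = (.lower <= pr_true) & (pr_true <= .upper))
```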
47\.3 Notes
-----------
\[1] This is because `rsample` does some fancy stuff under the hood. Basically `bootstraps` does not make any additional copies of the data; the price we pay for this efficiency is the need to call `analysis()`.
\[2] For a slightly more mathematical treatment of the bootstrap, try [these MIT course notes](https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf)
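To illustrate note \[1] above, here is a small sketch (assuming the `df_resample_norm` object from earlier): each split stores row indices into the original data rather than a copy, and `analysis()` is what materializes the corresponding bootstrap resample.
```
## A split is a lightweight rsplit object (indices, not a copy of the data)
split_1 <- df_resample_norm %>% pull(splits) %>% pluck(1)
split_1
## analysis() returns the corresponding bootstrap resample as a tibble
analysis(split_1)
```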
### 47\.1\.2 **q2** Modify the code below following `recall-fitdistr` to provide tidy results to `int_pctl()`.
*Hint*: You should only have to modify the formula (`~`) line.
```
df_q2 <-
df_resample_norm %>%
mutate(
estimates = map(
splits,
~ analysis(.x) %>% pull(x) %>% fitdistr(densfun = "normal") %>% tidy()
)
)
# NOTE: The following function call will work once you correctly edit the code above
int_pctl(df_q2, estimates)
```
```
## # A tibble: 2 × 6
## term .lower .estimate .upper .alpha .method
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 mean -0.378 -0.118 0.124 0.05 percentile
## 2 sd 0.750 0.911 1.07 0.05 percentile
```
Once you learn how to provide statistics in the form that `int_pctl()` is expecting, you’re off to the races! You can use the bootstrap to compute confidence intervals for very general settings.
One of the important things to remember is that the bootstrap is an *approximation*. The bootstrap relies on a number of assumptions; there are many, but two important ones are:
1. The data are representative of the population
2. Resampling is performed sufficiently many times
The next two tasks will study what happens when these two assumptions are not met.
### 47\.1\.3 **q3** (Representative sample) Read the following code before running it, and make a hypothesis about the result. Is the sample entering `bootstraps()` representative of the population `rnorm(mean = 0, sd = 1)`? How are the bootstrap results affected?
```
## TASK: Read this code; will the data be representative of the population
## rnorm(mean = 0, sd = 1)?
tibble(x = rnorm(n = 100)) %>%
filter(x < 0) %>%
bootstraps(times = 1000) %>%
mutate(
estimates = map(
splits,
~ analysis(.x) %>% pull(x) %>% fitdistr(densfun = "normal") %>% tidy()
)
) %>%
int_pctl(estimates)
```
```
## # A tibble: 2 × 6
## term .lower .estimate .upper .alpha .method
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 mean -0.965 -0.770 -0.605 0.05 percentile
## 2 sd 0.425 0.572 0.718 0.05 percentile
```
**Observations**:
* The sample is not at all representative; we are totally missing all positive samples.
* Correspondingly, the mean is much lower than it should be, and the standard deviation is too small.
The following code generates `100` different samples from a normal distribution (each with `n = 10`), and computes a very coarse bootstrap for each one.
### 47\.1\.4 **q4** (Number of replicates) First run this code, and comment on whether the approximate coverage probability is close to the nominal `0.95`. Increase the value of `n_boot` and re\-run; at what point does the coverage probability approach the desired `0.95`?
*Note*: At higher values of `n_boot`, the following code can take a long while to run. I recommend keeping `n_boot <= 1000`.
```
## TASK: Run this code,
set.seed(101)
times <- 100 # Number of bootstrap resamples
df_q4 <-
map_dfr(
seq(1, 100), # Number of replicates
function(repl) {
tibble(x = rnorm(10)) %>%
bootstraps(times = times) %>%
mutate(
estimates = map(
splits,
~ analysis(.x) %>% pull(x) %>% fitdistr(densfun = "normal") %>% tidy()
)
) %>%
int_pctl(estimates, alpha = 1 - 0.95) %>%
mutate(repl = repl)
}
)
```
```
## Warning: Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
```
```
## Warning in bootstraps(., times = times): Some assessment sets contained zero
## rows.
```
```
## Warning: Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
```
```
## Warning in bootstraps(., times = times): Some assessment sets contained zero
## rows.
```
```
## Warning: Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
```
```
## Estimate the coverage probability of the bootstrap intervals
df_q4 %>%
filter(term == "mean") %>%
mutate(cover = (.lower <= 0) & (0 <= .upper)) %>%
summarize(mean(cover))
```
```
## # A tibble: 1 × 1
## `mean(cover)`
## <dbl>
## 1 0.9
```
**Observations**:
* I find a coverage probability around `0.73` at `n_boot = 10`. This is much smaller than desired.
* At `n_boot = 1000` I find an estimated coverage probability around `0.88`, which is closer but not perfect.
*Aside*: The `rsample` function `int_pctl` actually complains when you give it fewer than `1000` replicates. Since you’ll usually be running the bootstrap only a handful of times (rather than `100` above), you need not be stingy with bootstrap replicates. Do at least `1000` in most cases.
### 47\.1\.1 **q1** Modify the code above to use within a `mutate()` call on `df_resample_norm`. Assign the mean estimates to the new column `mean_est`.
```
df_q1 <-
df_resample_norm %>%
mutate(
mean_est = map_dbl(
df,
~ summarize(.x, mean_est = mean(x)) %>% pull(mean_est)
)
)
df_q1
```
```
## # Bootstrap sampling
## # A tibble: 1,000 × 4
## splits id df mean_est
## <list> <chr> <list> <dbl>
## 1 <split [50/16]> Bootstrap0001 <tibble [50 × 1]> -0.0667
## 2 <split [50/21]> Bootstrap0002 <tibble [50 × 1]> -0.244
## 3 <split [50/20]> Bootstrap0003 <tibble [50 × 1]> 0.0499
## 4 <split [50/15]> Bootstrap0004 <tibble [50 × 1]> -0.0747
## 5 <split [50/21]> Bootstrap0005 <tibble [50 × 1]> 0.00831
## 6 <split [50/16]> Bootstrap0006 <tibble [50 × 1]> -0.197
## 7 <split [50/19]> Bootstrap0007 <tibble [50 × 1]> -0.252
## 8 <split [50/17]> Bootstrap0008 <tibble [50 × 1]> -0.239
## 9 <split [50/17]> Bootstrap0009 <tibble [50 × 1]> -0.189
## 10 <split [50/18]> Bootstrap0010 <tibble [50 × 1]> -0.136
## # … with 990 more rows
```
The following test will verify that your `df_q1` is correct:
```
## NOTE: No need to change this!
assertthat::assert_that(
assertthat::are_equal(
df_q1 %>% pull(mean_est),
v_mean_est
)
)
```
```
## [1] TRUE
```
```
print("Great job!")
```
```
## [1] "Great job!"
```
What we have now in `df_q1 %>% pull(mean_est)` is an approximation of the *sampling distribution* for the mean estimate. Remember that a confidence interval is a construction based on the sampling distribution, so this is the object we need! From this point, our job would be to work the mathematical manipulations necessary to construct a confidence interval from the quantiles of `df_q1 %>% pull(mean_est)`. Thankfully, the `rsample` package has already worked out those details for us!
The `rsample` function `int_pctl()` will compute (percentile) confidence intervals from a bootstrap resampling, but we need to compute our own statistics. Remember the `fitdistr()` function from the previous exercise?
```
# NOTE: No need to change this demo code
df_data_norm %>%
pull(x) %>%
fitdistr(densfun = "normal") %>%
tidy()
```
```
## # A tibble: 2 × 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 mean -0.124 0.131
## 2 sd 0.923 0.0923
```
The output of `fitdistr()`, after run through `tidy()`, is exactly what `int_pctl()` expects. Note that the output here is a tibble with a `term` column and two statistics: the `estimate` and the `std.error`. To use `int_pctl()`, we’ll have to provide statistics in this compatible form.
### 47\.1\.2 **q2** Modify the code below following `recall-fitdistr` to provide tidy results to `int_pctl()`.
*Hint*: You should only have to modify the formula (`~`) line.
```
df_q2 <-
df_resample_norm %>%
mutate(
estimates = map(
splits,
~ analysis(.x) %>% pull(x) %>% fitdistr(densfun = "normal") %>% tidy()
)
)
# NOTE: The following function call will work once you correctly edit the code above
int_pctl(df_q2, estimates)
```
```
## # A tibble: 2 × 6
## term .lower .estimate .upper .alpha .method
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 mean -0.378 -0.118 0.124 0.05 percentile
## 2 sd 0.750 0.911 1.07 0.05 percentile
```
Once you learn how to provide statistics in the form that `int_pctl()` is expecting, you’re off to the races! You can use the bootstrap to compute confidence intervals for very general settings.
One of the important things to remember is that the bootstrap is an *approximation*. The bootstrap relies on a number of assumptions; there are many, but two important ones are:
1. The data are representative of the population
2. Resampling is performed sufficiently many times
The next two tasks will study what happens when these two assumptions are not met.
### 47\.1\.3 **q3** (Representative sample) Read the following code before running it, and make a hypothesis about the result. Is the sample entering `bootstraps()` representative of the population `rnorm(mean = 0, sd = 1)`? How are the bootstrap results affected?
```
## TASK: Read this code; will the data be representative of the population
## rnorm(mean = 0, sd = 1)?
tibble(x = rnorm(n = 100)) %>%
filter(x < 0) %>%
bootstraps(times = 1000) %>%
mutate(
estimates = map(
splits,
~ analysis(.x) %>% pull(x) %>% fitdistr(densfun = "normal") %>% tidy()
)
) %>%
int_pctl(estimates)
```
```
## # A tibble: 2 × 6
## term .lower .estimate .upper .alpha .method
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 mean -0.965 -0.770 -0.605 0.05 percentile
## 2 sd 0.425 0.572 0.718 0.05 percentile
```
**Observations**:
* The sample is not at all representative; we are totally missing all positive samples.
* Correspondingly, the mean is much lower than it should be, and the standard deviation is too small.
The following code generates `100` different samples from a normal distribution (each with `n = 10`), and computes a very coarse bootstrap for each one.
### 47\.1\.4 **q4** (Number of replicates) First run this code, and comment on whether the approximate coverage probability is close to the nominal `0.95`. Increase the value of `n_boot` and re\-run; at what point does the coverage probability approach the desired `0.95`?
*Note*: At higher values of `n_boot`, the following code can take a long while to run. I recommend keeping `n_boot <= 1000`.
```
## TASK: Run this code,
set.seed(101)
times <- 100 # Number of bootstrap resamples
df_q4 <-
map_dfr(
seq(1, 100), # Number of replicates
function(repl) {
tibble(x = rnorm(10)) %>%
bootstraps(times = times) %>%
mutate(
estimates = map(
splits,
~ analysis(.x) %>% pull(x) %>% fitdistr(densfun = "normal") %>% tidy()
)
) %>%
int_pctl(estimates, alpha = 1 - 0.95) %>%
mutate(repl = repl)
}
)
```
```
## Warning: Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
```
```
## Warning in bootstraps(., times = times): Some assessment sets contained zero
## rows.
```
```
## Warning: Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
```
```
## Warning in bootstraps(., times = times): Some assessment sets contained zero
## rows.
```
```
## Warning: Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
## Recommend at least 1000 non-missing bootstrap resamples for terms: `mean`, `sd`.
```
```
## Estimate the coverage probability of the bootstrap intervals
df_q4 %>%
filter(term == "mean") %>%
mutate(cover = (.lower <= 0) & (0 <= .upper)) %>%
summarize(mean(cover))
```
```
## # A tibble: 1 × 1
## `mean(cover)`
## <dbl>
## 1 0.9
```
**Observations**:
* I find a coverage probability around `0.73` at `n_boot = 10`. This is much smaller than desired.
* At `n_boot = 1000` I find an estimated coverage probability around `0.88`, which is closer but not perfect.
*Aside*: The `rsample` function `int_pctl` actually complains when you give it fewer than `1000` replicates. Since you’ll usually be running the bootstrap only a handful of times (rather than `100` above), you need not be stingy with bootstrap replicates. Do at least `1000` in most cases.
47\.2 A Worked Example: Probability Estimate
--------------------------------------------
To finish, I’ll present some example code on how you can apply the bootstrap to a more complicated problem. In the previous exercise `e-stat08-fit-dist` we estimated a probability based on a fitted distribution. Now we have the tools to produce a bootstrap\-approximated a confidence interval for that probability estimate.
Remember that we had the following setup: sampling from a weibull distribution and estimating parameters with `fitdistr()`.
```
## NOTE: No need to change this example code
set.seed(101)
df_data_w <- tibble(y = rweibull(50, shape = 2, scale = 4))
pr_true <- pweibull(q = 2, shape = 2, scale = 4)
df_data_w %>%
pull(y) %>%
fitdistr(densfun = "weibull") %>%
tidy()
```
```
## # A tibble: 2 × 3
## term estimate std.error
## <chr> <dbl> <dbl>
## 1 shape 2.18 0.239
## 2 scale 4.00 0.274
```
In order to approximate a confidence interval for our probability estimate, we’ll need to provide the probability value in the form that `int_pctl()` expects. Below I define a helper function that takes each split, extracts the estimated parameters, and uses them to compute a probability estimate. I then add that value as a new row to the output of `tidy()`, making sure to populate the columns `estimate` and `term`.
```
## NOTE: No need to change this example code; but feel free to adapt it!
fit_fun <- function(split) {
## Fit distribution
df_tmp <-
analysis(split) %>%
pull(y) %>%
fitdistr(densfun = "weibull") %>%
tidy()
## Extract statistics
scale_est <-
df_tmp %>%
filter(term == "scale") %>%
pull(estimate)
shape_est <-
df_tmp %>%
filter(term == "shape") %>%
pull(estimate)
## Add probability estimate in tidy form
df_tmp %>%
bind_rows(tibble(
estimate = pweibull(q = 2, scale = scale_est, shape = shape_est),
term = "pr"
))
}
df_resample_pr <-
bootstraps(df_data_w, times = 1000) %>%
mutate(estimates = map(splits, fit_fun))
```
```
## Warning in densfun(x, parm[1], parm[2], ...): NaNs produced
```
*(This warning repeats for many of the 1000 bootstrap resamples.)* These warnings typically arise when the optimizer inside `fitdistr()` tries invalid (e.g., non\-positive) parameter values along the way; as long as each fit converges, they are generally harmless.
Now `df_resample_pr` contains everything I need; I simply pass it to `int_pctl()`:
```
## NOTE: No need to change this example code
int_pctl(df_resample_pr, estimates)
```
```
## # A tibble: 3 × 6
## term .lower .estimate .upper .alpha .method
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 pr 0.121 0.197 0.278 0.05 percentile
## 2 scale 3.50 3.99 4.51 0.05 percentile
## 3 shape 1.87 2.23 2.68 0.05 percentile
```
```
pr_true
```
```
## [1] 0.2211992
```
When I run this, I find that the confidence interval contains `pr_true` as one might hope!
47\.3 Notes
-----------
\[1] This is because `rsample` does some fancy stuff under the hood. Basically `bootstraps` does not make any additional copies of the data; the price we pay for this efficiency is the need to call `analysis()`.
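To make this concrete, here’s a small illustration (not part of the exercise; it assumes the `df_data_w` and packages from the worked example above, and `boot_demo` is a throwaway name):
```
## NOTE: Illustration only; `boot_demo` is not used elsewhere
boot_demo <- bootstraps(df_data_w, times = 1)
boot_demo$splits[[1]] # prints as an <rsplit> summary, not a data frame
analysis(boot_demo$splits[[1]]) %>% glimpse() # materializes the resampled rows
```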
\[2] For a slightly more mathematical treatment of the bootstrap, try [these MIT course notes](https://ocw.mit.edu/courses/mathematics/18-05-introduction-to-probability-and-statistics-spring-2014/readings/MIT18_05S14_Reading24.pdf)
48 Data: A Simple Data Pipeline
===============================
*Purpose*: Analyzing existing data is helpful, but it’s even more important to be able to *obtain relevant data*. One kind of data is survey data, which is helpful for understanding things about people. In this short exercise you’ll learn how to set up your own survey, link it to a cloud\-based sheet, and automatically download that sheet for local data analysis.
*Reading*: (None, this exercise *is* the reading)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(googlesheets4)
```
48\.1 Reading a Sheet with `googlesheets4`
------------------------------------------
The [googlesheets4](https://googlesheets4.tidyverse.org/) package provides a convenient interface to Google Sheet’s API \[1]. We’ll use this to set up a *very simple* data pipeline: A means to collect data at some user\-facing point, and load that data for analysis.
48\.2 Public sheets
-------------------
Back in c02\-michelson you actually used googlesheets4 to load the speed of light data:
```
## Note: No need to edit this chunk!
url_michelson <- "https://docs.google.com/spreadsheets/d/1av_SXn4j0-4Rk0mQFik3LLr-uf0YdA06i3ugE6n-Zdo/edit?usp=sharing"
## Put googlesheets4 in "deauthorized mode"
gs4_deauth()
## Get sheet metadata
ss_michelson <- gs4_get(url_michelson)
## Load the sheet as a dataframe
df_michelson <-
read_sheet(ss_michelson) %>%
select(Date, Distinctness, Temp, Velocity) %>%
mutate(Distinctness = as_factor(Distinctness))
```
```
## ✔ Reading from "michelson1879".
```
```
## ✔ Range 'Sheet1'.
```
```
df_michelson %>% glimpse
```
```
## Rows: 100
## Columns: 4
## $ Date <dttm> 1879-06-05, 1879-06-07, 1879-06-07, 1879-06-07, 1879-06-…
## $ Distinctness <fct> 3, 2, 2, 2, 2, 2, 3, 3, 3, 3, 2, 2, 2, 2, 2, 1, 3, 3, 2, …
## $ Temp <dbl> 76, 72, 72, 72, 72, 72, 83, 83, 83, 83, 83, 90, 90, 71, 7…
## $ Velocity <dbl> 299850, 299740, 299900, 300070, 299930, 299850, 299950, 2…
```
I made this sheet public so that anyone can access it. The line `gs4_deauth()` tells the googlesheets4 package not to ask for login information; this way you can easily load this public sheet, even without having a Google account.
But what if we want to load one of our own *private* data sheets?
48\.3 Private sheets
--------------------
In order to load a private data sheet, you’ll need to *authorize* googlesheets4 to use your Google account. The following line should open a browser window that will ask for your permissions.
```
## NOTE: No need to edit; run to authorize R to use your google account
gs4_auth()
```
Now that you’ve authorized your account, let’s create a very simple data\-collection pipeline.
48\.4 Setting up a Form \+ Sheet
--------------------------------
One convenient feature of Google Sheets is that it nicely integrates with Google Forms: We can create a form (a survey) and link it to a sheet. Let’s do that!
### 48\.4\.1 **q1** Create your own form.
Go to [Google Forms](https://www.google.com/forms/about/) and create a new form. Add at least one question.
### 48\.4\.2 **q2** Navigate to the `Responses` tab and click `Create Spreadsheet`. Select `Create a new spreadsheet` and accept the default name.
Create spreadsheet linked to form
### 48\.4\.3 **q3** Copy the URL for your new sheet and paste it below. Run the following chunk to load your (probably empty) sheet.
```
## NOTE: I'm not going to put a URL here, as one of my personal sheets
## won't work for you....
url_custom_sheet <- ""
df_custom_sheet <- read_sheet(url_custom_sheet)
df_custom_sheet %>% glimpse()
```
Now as results from your survey come in, you can simply re\-run this notebook to grab the most recent version of your data for local analysis.
This is *very simple* but *surprisingly powerful*: I use a pipeline exactly like this for the exit tickets!
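For instance, here’s a minimal sketch of that re\-run workflow (it assumes you’ve pasted your sheet URL into `url_custom_sheet` above; `df_latest` is just an illustrative name):
```
## NOTE: Sketch only; relies on `url_custom_sheet` being filled in above
df_latest <- read_sheet(url_custom_sheet)
## Quick check on how many responses have arrived so far
df_latest %>% summarize(n_responses = n())
```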
48\.5 Notes
-----------
\[1] It’s `googlesheets4` because the package is designed for V4 of Google Sheet’s API.
49 Vis: Small Multiples
=======================
*Purpose*: A powerful idea in visualization is the *small multiple*. In this exercise you’ll learn how to design and create small multiple graphs.
*Reading*: (None; there’s a bit of reading here.)
“At the heart of quantitative reasoning is a single question: *Compared to
what?*” Edward Tufte on small multiples.
49\.1 Small Multiples
---------------------
Facets in ggplot allow us to apply the ideas of [small
multiples](https://en.wikipedia.org/wiki/Small_multiple). As an example,
consider the following graph:
```
economics %>%
pivot_longer(
names_to = "variable",
values_to = "value",
cols = c(pce, pop, psavert, uempmed, unemploy)
) %>%
ggplot(aes(date, value)) +
geom_line() +
facet_wrap(~variable, scales = "free_y")
```
The “multiples” are the different panels; above we’ve separated the different variables into their own panels. This allows us to compare trends simply by looking across the different panels. The faceting above works well for comparing trends: It’s clear by inspection whether the various trends are increasing, decreasing, etc.
The next example with the `mpg` data is not so effective:
```
## NOTE: No need to edit; study this example
mpg %>%
ggplot(aes(displ, hwy)) +
geom_point() +
facet_wrap(~class)
```
With these scatterplots it’s more difficult to “keep in our heads” the absolute positions of the other points as we look across the multiples. Instead we could add some “ghost” points:
```
## NOTE: No need to edit; study this example
mpg %>%
ggplot(aes(displ, hwy)) +
## A bit of a trick; remove the facet variable to prevent faceting
geom_point(
data = . %>% select(-class),
color = "grey80"
) +
geom_point() +
facet_wrap(~class) +
theme_minimal()
```
There’s a trick to getting the visual above: removing the facet variable from the layer’s internal dataframe prevents that layer from being faceted. This, combined with a second point layer, gives the “ghost” point effect.
The presence of these “ghost” points provides more context; they facilitate the “Compared to what?” question that Tufte puts at the center of quantitative reasoning.
### 49\.1\.1 **q1** Edit the following figure to use the “ghost” point trick above.
```
## TODO: Edit this code to facet on `cut`, but keep "ghost" points to aid in
## comparison.
diamonds %>%
ggplot(aes(carat, price)) +
geom_point()
```
```
diamonds %>%
ggplot(aes(carat, price)) +
geom_point(
data = . %>% select(-cut),
color = "grey80"
) +
geom_point() +
facet_wrap(~cut)
```
49\.2 Organizing Factors
------------------------
Sometimes your observations will organize into natural categories. In this case facets are a great way to group your observations. For example, consider the following figure:
```
mpg %>%
group_by(model) %>%
filter(row_number(desc(year)) == 1) %>%
ungroup() %>%
mutate(
manufacturer = fct_reorder(manufacturer, hwy),
model = fct_reorder(model, desc(hwy))
) %>%
ggplot(aes(hwy, model)) +
geom_point() +
facet_grid(manufacturer~., scale = "free_y", space = "free") +
theme(
strip.text.y = element_text(angle = 0)
)
```
There’s *a lot* going on in this figure, including a number of subtle points. Let’s list them out:
* I filter on the latest model with the `row_number` call (not strictly necessary).
* I’m re\-ordering both the `manufacturer` and `model` on `hwy`.
+ However, I reverse the order of `model` to get a consistent “descending” pattern.
* I set both the `scale` and `space` arguments of the facet call; without those the spacing would be messed up (try it!).
* I rotate the facet labels to make them more readable.
### 49\.2\.1 **q2** Create a small multiple plot like `ex-mpg-manufacturer` above. Keep in mind the idea of “compared to what?” when deciding which variables to place close to one another.
```
## TODO: Create a set of small multiples plot from these data
as_tibble(iris) %>%
pivot_longer(
names_to = "part",
values_to = "length",
cols = -Species
)
```
```
## # A tibble: 600 × 3
## Species part length
## <fct> <chr> <dbl>
## 1 setosa Sepal.Length 5.1
## 2 setosa Sepal.Width 3.5
## 3 setosa Petal.Length 1.4
## 4 setosa Petal.Width 0.2
## 5 setosa Sepal.Length 4.9
## 6 setosa Sepal.Width 3
## 7 setosa Petal.Length 1.4
## 8 setosa Petal.Width 0.2
## 9 setosa Sepal.Length 4.7
## 10 setosa Sepal.Width 3.2
## # … with 590 more rows
```
```
as_tibble(iris) %>%
pivot_longer(
names_to = "part",
values_to = "length",
cols = -Species
) %>%
ggplot(aes(length, Species)) +
geom_point() +
facet_grid(part~., scale = "free_y", space = "free") +
theme(
strip.text.y = element_text(angle = 0)
)
```
I chose to put the measurements of the same part close together, to facilitate
comparison of the common plant features across different species.
50 Stats: Introduction to Hypothesis Testing
============================================
*Purpose*: Part of the payoff of statistics is to support making *decisions
under uncertainty*. To frame these decisions we will use the framework of
*hypothesis testing*. In this exercise you’ll learn how to set up competing
hypotheses and potential actions, based on different scenarios.
*Reading*: [Statistical Inference in One Sentence](https://medium.com/hackernoon/statistical-inference-in-one-sentence-33a4683a6424) (9 min)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(rsample)
```
50\.1 A Full Example
--------------------
You are considering buying a set of diamonds in bulk. The prospective vendor is
willing to sell you 100 diamonds at $1700 per diamond. You will *not* get to see
the specific diamonds before buying, though. To convince you, the vendor gives
you a detailed list of a prior package of bulk diamonds they sold
recently—they tell you this is *representative* of the packages they sell.
This is a weird contract, but it’s intriguing. Let’s use statistics to help determine whether or not to take the deal.
50\.2 Pick your population
--------------------------
For the sake of this exercise, let’s assume that `df_population` is the entire
set of diamonds the vendor has in stock.
```
## NOTE: No need to change this!
df_population <-
diamonds %>%
filter(carat < 1)
```
**Important Note**: No peeking! While I’ve defined `df_population` here, you
*should not* look at its values until the end of the exercise.
While we do have access to the entirety of the population, in most real problems
we’ll only have a sample. The function `slice_sample()` allows us to choose a
*random* sample from a dataframe.
```
## NOTE: No need to change this!
set.seed(101)
df_sample <-
df_population %>%
slice_sample(n = 100)
```
50\.3 Set up your hypotheses and actions
----------------------------------------
Based on the contract above, our decision threshold should be related to the
sale price the vendor quotes.
```
## NOTE: No need to change this; this will be our decision threshold
price_threshold <- 1700
```
```
## NOTE: This is for exercise-design purposes: What are the true parameters?
df_population %>%
group_by(cut) %>%
summarize(price = mean(price)) %>%
bind_rows(
df_population %>%
summarize(price = mean(price)) %>%
mutate(cut = "(All)")
)
```
```
## # A tibble: 6 × 2
## cut price
## <chr> <dbl>
## 1 Fair 2092.
## 2 Good 1793.
## 3 Very Good 1732.
## 4 Premium 1598.
## 5 Ideal 1546.
## 6 (All) 1633.
```
In order to do hypothesis testing, we need to define *null and alternative
hypotheses*. These two hypotheses are competing theories for the state of the
world.
Furthermore, we are aiming to use hypothesis testing *to support making a
decision*. To that end, we’ll also define a default action (if we fail to reject
the null), and an alternative action (if we find our evidence sufficiently
convincing so as to change our minds).
For this buying scenario, we feel that the contract is pretty weird: We’ll set
up our null hypothesis to assume the vendor is trying to rip us off. In order to
make this hypothesis testable, we’ll need to make it *quantitative*.
One way to make our hypothesis quantitative is to think about the mean price of
diamonds in the population: If the diamonds are—on average—less expensive
than the `price_threshold`, then on average we’ll tend to get a set of diamonds
that are worth less than what we paid. This will be our null hypothesis.
Consequently, our default action will be to buy no diamonds from this vendor. In
standard statistics notation, this is how we denote our null and alternative
hypotheses:
**H\_0** (Null hypothesis) The mean price of all diamonds in the population is
less than the threshold `price_threshold`.
\- Default action: Buy no diamonds
**H\_A** (Alternative hypothesis) The mean price of all diamonds in the population is equal to or greater than the threshold `price_threshold`.
\- Alternative action: Buy diamonds in bulk
50\.4 Compute
-------------
### 50\.4\.1 **q1** Based on your results, can you reject the null hypothesis **H\_0** for the population with a 95\-percent confidence interval?
```
## TASK: Compute a confidence interval on the mean, use to answer the question
df_sample %>%
summarize(
price_mean = mean(price),
price_sd = sd(price),
price_lo = price_mean - 1.96 * price_sd / sqrt(n()),
price_hi = price_mean + 1.96 * price_sd / sqrt(n())
) %>%
select(price_lo, price_hi)
```
```
## # A tibble: 1 × 2
## price_lo price_hi
## <dbl> <dbl>
## 1 1418. 1856.
```
```
price_threshold
```
```
## [1] 1700
```
**Observations**:
* Based on the CI above, we *cannot* reject the null hypothesis **H\_0**.
* Since we do not reject **H\_0** we take our default action of buying no diamonds from the vendor.
50\.5 Different Scenario, Different Hypotheses
----------------------------------------------
50\.6 Proportion Ideal
----------------------
Let’s imagine a different scenario: We have a lead on a buyer of engagement
rings who is *obsessed* with well\-cut diamonds. If we could buy at least `50`
diamonds with cut `Premium` or `Ideal` (what we’ll call “high\-cut”), we could
easily recoup the cost of the bulk purchase.
If the proportion of high\-cut diamonds in the vendor’s population is greater
than 50 percent, we stand a good chance of making a lot of money.
Unfortunately, I haven’t taught you any techniques for estimating a [CI for a
proportion](https://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval).
*However* in `e-stat09-bootstrap` we learned a general approximation technique:
*the bootstrap*. Let’s put that to work to estimate a confidence interval for
the proportion of high\-cut diamonds in the population.
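For reference only (not needed for the exercise): the textbook normal\-approximation interval linked above can be computed directly. Here’s a quick sketch for our sample, using roughly `2.58` as the 99\-percent normal quantile:
```
## NOTE: Not part of the exercise; a sketch of the normal-approximation
## (Wald) interval for a proportion, for comparison with the bootstrap below
df_sample %>%
  summarize(
    p_high = mean((cut == "Premium") | (cut == "Ideal")),
    se = sqrt(p_high * (1 - p_high) / n()),
    ci_lwr = p_high - 2.58 * se,
    ci_upr = p_high + 2.58 * se
  )
```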
50\.7 Hypotheses and Actions
----------------------------
Let’s redefine our hypotheses to match the new scenario.
**H\_0** (Null hypothesis) The proportion of high\-cut diamonds in the population
is less than 50 percent.
\- Default action: Buy no diamonds
**H\_A** (Alternative hypothesis) The proportion of high\-cut diamonds in the population is equal to or greater than 50 percent.
\- Alternative action: Buy diamonds in bulk
Furthermore, let’s change our decision threshold from 95\-percent confidence to a
higher 99\-percent confidence.
### 50\.7\.1 **q2** Use the techniques you learned in `e-stat09-bootstrap` to estimate a 99\-percent confidence interval for the population proportion of high\-cut diamonds. Can you reject the null hypothesis? What decision do you take?
*Hint 1*: Remember that you can use `mean(X == "value")` to compute the proportion
of cases in a sample with variable `X` equal to `"value"`. You’ll need to figure out how to combine the cases of `Premium` and `Ideal`.
*Hint 2* `int_pctl()` takes an `alpha` keyword argument; this is simply `alpha = 1 - confidence`.
```
## TASK: Estimate a confidence interval for the proportion of high-cut diamonds
## in the population. Look to `e-stat09-bootstrap` for starter code.
set.seed(101)
fit_fun <- function(split) {
analysis(split) %>%
summarize(estimate = mean((cut == "Premium") | (cut == "Ideal"))) %>%
mutate(term = "proportion_high")
}
df_resample_total_price <-
bootstraps(df_sample, times = 1000) %>%
mutate(estimates = map(splits, fit_fun))
int_pctl(df_resample_total_price, estimates, alpha = 0.01)
```
```
## # A tibble: 1 × 6
## term .lower .estimate .upper .alpha .method
## <chr> <dbl> <dbl> <dbl> <dbl> <chr>
## 1 proportion_high 0.530 0.640 0.750 0.01 percentile
```
**Observations**:
* Based on the CI above, we *can* reject the null hypothesis **H\_0**.
* Since we reject **H\_0** we take our alternative action and buy the diamonds!
50\.8 Closing Thoughts
----------------------
50\.9 The big reveal
--------------------
To close this exercise, let’s reveal whether our chosen hypotheses matched the
underlying population.
### 50\.9\.1 **q3** Compute the population mean price for the diamonds. Did you reject the
null hypothesis?
```
## TASK: Compute the population mean of diamond price
df_population %>%
summarize(price = mean(price))
```
```
## # A tibble: 1 × 1
## price
## <dbl>
## 1 1633.
```
```
price_threshold
```
```
## [1] 1700
```
**Observations**:
When I did q1, I **did not reject the null**. Note the weird wording there:
**did not reject the null**, rather than “accepted the null”. In this
hypothesis testing framework we never actually *accept* the null hypothesis, we
can only *fail to reject the null*. What this means is that we still maintain
the possibility that the null is false, and all we can say for sure is that our
data are not sufficient to reject the null hypothesis.
In other words, when we fail to reject the null hypothesis “we’ve learned
nothing.”
Learning nothing isn’t a bad thing though! It’s an important part of statistics
to recognize when we’ve learned nothing.
### 50\.9\.2 **q4** Compute the proportion of high\-cut diamonds in the population. Did you
reject the null hypothesis?
```
## TASK: Compute the population proportion of high-cut diamonds
df_population %>%
summarize(proportion = mean((cut == "Premium") | (cut == "Ideal")))
```
```
## # A tibble: 1 × 1
## proportion
## <dbl>
## 1 0.667
```
**Observations**:
When I did q2 I **did reject the null hypothesis**. It happens that this was the
correct choice; the true proportion of high\-cut diamonds is greater than
50\-percent.
50\.10 End notes
----------------
Note that the underlying population is *identical* in the two settings above,
but the “correct” decision is *different*. This helps illustrate that **math
alone cannot help you frame a reasonable hypothesis**. Ultimately, you must
understand the situation you are in, and the decisions you are considering.
If you’ve taken a statistics course, you might be wondering why I’m talking
about hypothesis testing *without* introducing p\-values. I feel that confidence
intervals more obviously communicate the uncertainty in results, in line with
Andrew Gelman’s suggestion that we [embrace
uncertainty](https://stat.columbia.edu/~gelman/research/published/asa_pvalues.pdf).
The penalty we pay working with (two\-sided) confidence intervals is a reduction
in [statistical power](https://en.wikipedia.org/wiki/Power_of_a_test).
51 Stats: Confidence vs Prediction Intervals
============================================
*Purpose*: There are multiple kinds of statistical intervals, and different intervals are useful for answering different questions. In this exercise, we’ll learn about *prediction intervals*: How they differ from confidence intervals, and when we would use a CI versus a PI.
*Reading*: (None, this is the reading)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(modelr)
library(broom)
```
```
##
## Attaching package: 'broom'
```
```
## The following object is masked from 'package:modelr':
##
## bootstrap
```
```
## Helper function to compute uncertainty bounds
add_uncertainties <- function(data, model, prefix = "pred", ...) {
  ## Evaluate the model on `data`; extra arguments (e.g. interval = "prediction")
  ## are passed through to predict()
  df_fit <-
    stats::predict(model, data, ...) %>%
    as_tibble() %>%
    ## Prefix the prediction columns, e.g. `fit` becomes `pred_fit`
    rename_with(~ str_c(prefix, "_", .))
  ## Return the original data with the prediction columns appended
  bind_cols(data, df_fit)
}
```
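To see what this helper returns, here’s a minimal usage sketch (not part of the exercise; `fit_demo` is a throwaway model used only for illustration):
```
## NOTE: Usage sketch only; `fit_demo` is not used elsewhere in this exercise
fit_demo <- lm(price ~ carat, data = diamonds)

diamonds %>%
  slice_head(n = 5) %>%
  add_uncertainties(fit_demo, interval = "prediction", level = 0.95) %>%
  select(carat, price, starts_with("pred_"))
```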
51\.1 Introduction: Confidence vs Prediction Intervals
------------------------------------------------------
There are multiple kinds of statistical intervals: We have already discussed [confidence intervals](https://en.wikipedia.org/wiki/Confidence_interval) (in e\-stat06\-clt), now we’ll discuss [prediction intervals](https://en.wikipedia.org/wiki/Prediction_interval).
51\.2 Specific Mathematical Example: Normal Distribution
--------------------------------------------------------
To help distinguish between between confidence intervals (CI) and prediction intervals (PI), let’s first limit our attention to normal distributions (where the math is easy).
We saw in e\-stat06\-clt that a confidence interval is a way to summarize our knowledge about an *estimated parameter*; for instance, a confidence interval \\(\[l, u]\\) for the sample mean \\(\\overline{X}\\) of a normal distribution at confidence level \\(C\\) would be
\\\[C \= \\mathbb{P}\\left\[l \< \\overline{X} \< u\\right] \= \\mathbb{P}\\left\[\\frac{l \- \\mu}{\\sigma / \\sqrt{n}} \< Z \< \\frac{u \- \\mu}{\\sigma / \\sqrt{n}}\\right].\\]
Note the \\(\\sigma / \\sqrt{n}\\) in the denominator on the right; this is the standard error for the sample mean \\(\\overline{X}\\). A CI is a useful way to summarize our uncertainty about an estimated parameter.
A different kind of interval is a *prediction interval* (PI). Rather than summarizing information about an estimated parameter, a PI summarizes information about *future observations*. The following equation defines a prediction interval for a normal distribution *assuming we magically know the mean and variance*:
\\\[P \= \\mathbb{P}\\left\[l \< X \< u\\right] \= \\mathbb{P}\\left\[\\frac{l \- \\mu}{\\sigma} \< Z \< \\frac{u \- \\mu}{\\sigma}\\right]\\]
**Observations**:
* Note that the CI equation above has a dependence on \\(n\\); as we gather more data the interval will tend to narrow.
* Note that the PI equation above has *no dependence* on \\(n\\); when we turn the “magic” off and have to estimate `mean, sd` from data, a dependence on \\(n\\) shows up (see the sketch below). However, even if we had infinite data (recovering the “magic” equation above), the interval would still not collapse to zero width.
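For reference, here is the standard normal\-theory result when we *do* estimate the parameters (a sketch, not derived in the reading): with sample mean \\(\\overline{X}\\), sample standard deviation \\(s\\), and \\(n\\) observations, the prediction interval becomes
\\\[\\overline{X} \\pm t \\cdot s \\sqrt{1 + 1/n},\\]
where \\(t\\) is the appropriate quantile of the \\(t\\) distribution with \\(n \- 1\\) degrees of freedom. As \\(n \\to \\infty\\) the factor \\(\\sqrt{1 + 1/n}\\) tends to one and \\(t\\) tends to the corresponding normal quantile, so the interval width approaches a nonzero limit rather than shrinking to zero.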
### 51\.2\.1 **q1** Check your understanding; I provide code below to compute a confidence interval for the sample mean when sampling from `rnorm(mean = 1, sd = 2)` with `n = 400`. Modify the code to compute a prediction interval for the same underlying normal distribution.
```
## NOTE: No need to edit this setup
mu <- 1 # Normal mean
sd <- 2 # Normal standard deviation
n <- 400 # Number of samples
ci_lwr <- mu - 1.96 * sd / sqrt(n)
ci_upr <- mu + 1.96 * sd / sqrt(n)
pi_lwr <- mu - 1.96 * sd
pi_upr <- mu + 1.96 * sd
```
Use the following tests to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(abs(pi_lwr + 2.92) <= 1e-6)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(pi_upr - 4.92) <= 1e-6)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
Our first observation about CI and PI is that PI will tend to be wider than CI! That’s because they are telling us *fundamentally different things* about our population. Consequently, we use CI and PI for *very different applications*.
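If you want to see these two jobs in action, the following simulation sketch (not part of the exercise) draws many fresh samples from the same normal distribution and checks how often the q1 intervals succeed at their respective tasks:
```
## Sketch (not part of the exercise): empirical coverage of the q1 intervals
set.seed(101)
map_dfr(
  1:1000,
  function(i) {
    x <- rnorm(n, mean = mu, sd = sd)     # a fresh sample of size n
    x_new <- rnorm(1, mean = mu, sd = sd) # one future observation
    tibble(
      ci_covers = ci_lwr <= mean(x) & mean(x) <= ci_upr,
      pi_covers = pi_lwr <= x_new & x_new <= pi_upr
    )
  }
) %>%
  summarize(across(everything(), mean)) # both proportions should be near 0.95
```
The CI brackets the *sample mean* about 95% of the time, while the PI brackets a *single future observation* about 95% of the time; they target fundamentally different quantities.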
51\.3 Applications of CI and PI
-------------------------------
A *confidence interval* is most likely to be useful when we care *more about aggregates*—rather than the individual observations.
A *prediction interval* is most likely to be useful when we care *more about individual observations*—rather than the aggregate behavior.
Let’s think back to e\-stat10\-hyp\-intro, where we were buying *many* diamonds. In that case we constructed confidence intervals on the mean price of diamonds and on the proportion of high\-cut diamonds. Since we cared primarily about the properties of *many* diamonds, it made sense to use confidence intervals to support our decision making.
Now let’s think of a different application: Imagine we were going to purchase just *one diamond*. In that case we don’t care about the *mean price*; we care about the *single price* of the *one diamond* we’ll ultimately end up buying. In this case, we would be better off constructing a prediction interval for the price of diamonds from the population—this will give us a sense of the range of values we might encounter in our purchase.
Prediction intervals are also used for [other applications](https://en.wikipedia.org/wiki/Prediction_interval#Applications), such as defining a “standard reference range” for blood tests: Since doctors care about the individual patients—we want *every* patient to survive, not just mythical “average” patients!—it is more appropriate to use a prediction interval for this application.
Let’s apply these ideas to the diamonds dataset:
```
## NOTE: No need to edit this setup
# Create a train-validate split
set.seed(101)
diamonds_randomized <-
diamonds %>%
slice(sample(dim(diamonds)[1]))
diamonds_train <-
diamonds_randomized %>%
slice(1:10000)
diamonds_validate <-
diamonds_randomized %>%
slice(10001:20000)
```
We’re about to blindly apply the normal\-assuming formulae, but before we do that, let’s quickly inspect our data to see how normal or not they are:
```
## NOTE: No need to edit this chunk
bind_rows(
diamonds_train %>% mutate(source = "Train"),
diamonds_validate %>% mutate(source = "Validate")
) %>%
ggplot(aes(price)) +
geom_histogram(bins = 100) +
facet_grid(source ~ .)
```
Take a quick look at the plot above, and make a prediction (to yourself) whether the normally\-approximated CI and PI will behave well in this case. Then continue on to q2\.
### 51\.3\.1 **q2** Using the formulas above, estimate CI and PI using `diamonds_train`. Visualize the results using the chunk `q2-vis` below, and answer the questions under *observations*.
```
df_q2 <-
diamonds_train %>%
summarize(
price_mean = mean(price),
price_sd = sd(price),
price_n = n()
) %>%
mutate(
    ci_lwr = price_mean - 1.96 * price_sd / sqrt(price_n),
    ci_upr = price_mean + 1.96 * price_sd / sqrt(price_n),
pi_lwr = price_mean - 1.96 * price_sd,
pi_upr = price_mean + 1.96 * price_sd
) %>%
select(ci_lwr, ci_upr, pi_lwr, pi_upr)
df_q2
```
```
## # A tibble: 1 × 4
## ci_lwr ci_upr pi_lwr pi_upr
## <dbl> <dbl> <dbl> <dbl>
## 1 3561. 4340. -3842. 11743.
```
Use the following code to visualize your results; answer the questions below.
```
## NOTE: No need to edit this chunk
df_q2 %>%
pivot_longer(
names_to = c("type", ".value"),
names_sep = "_",
cols = everything()
) %>%
ggplot() +
geom_point(
data = diamonds_validate,
mapping = aes(x = "", y = price),
position = position_jitter(width = 0.3),
size = 0.2
) +
geom_errorbar(aes(x = "", ymin = lwr, ymax = upr, color = type)) +
  guides(color = "none") +
facet_grid(~ type)
```
**Observations**:
* Visually the CI and PI seem decent.
+ The CI seems to be located in the “middle” of the data.
+ The PI covers a wide fraction of the data. However, its lower bound goes negative, which is undesirable.
* I would check the CI against the population mean (if available) or a validation mean.
* I would check if the PI contains an appropriate fraction of prices, either from the population (if available), or from validation data.
* Both the CI and PI above assume a normal distribution and perfectly\-known parameters `mean, sd`. The assumption of perfectly\-known parameters is probably ok here (since we have *a lot* of data), but based on EDA we’ve done before, the assumption of normality is quite poor (see the sketch below for a distribution\-free alternative).
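A minimal sketch (not part of the exercise) of that distribution\-free alternative, using empirical quantiles of the training prices:
```
## Sketch (not part of the exercise): a distribution-free PI from empirical quantiles
diamonds_train %>%
  summarize(
    pi_lwr_q = quantile(price, 0.025),
    pi_upr_q = quantile(price, 0.975)
  )
```
Unlike the normal formula, this interval cannot extend below the smallest observed price, so its lower bound will not go negative.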
### 51\.3\.2 **q3** Test whether your CI and PI are constructed correctly: Remember the definitions of what CI and PI are meant to accomplish, and check how closely your intervals agree with the validation data.
```
## TODO: Devise a test to see if your CI and PI are correctly reflecting
## the diamonds population; use diamonds_validate in your testing
## Testing the CI
bind_cols(
df_q2 %>% select(ci_lwr, ci_upr),
diamonds_validate %>% summarize(price_mean = mean(price))
) %>%
select(ci_lwr, price_mean, ci_upr)
```
```
## # A tibble: 1 × 3
## ci_lwr price_mean ci_upr
## <dbl> <dbl> <dbl>
## 1 3561. 3917. 4340.
```
```
## Testing the PI
left_join(
diamonds_validate,
df_q2 %>% select(pi_lwr, pi_upr),
by = character()
) %>%
summarize(P_empirical = mean(pi_lwr <= price & price <= pi_upr))
```
```
## # A tibble: 1 × 1
## P_empirical
## <dbl>
## 1 0.935
```
**Observations**:
* My CI does include the population mean.
* My PI includes \~0\.94 of the validation prices, which is quite close to the 0\.95 desired.
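One small aside on the PI test above: `left_join()` with `by = character()` performs a cross join, a pattern that newer versions of dplyr discourage in favor of `cross_join()`. A join is not strictly needed here; the following sketch (not part of the exercise) performs the same check directly:
```
## Sketch (not part of the exercise): the same PI coverage check without a join
diamonds_validate %>%
  summarize(
    P_empirical = mean(df_q2$pi_lwr <= price & price <= df_q2$pi_upr)
  )
```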
52 Model: Variability Quadrants
===============================
*Purpose*: All real data have variability: repeated measurements of “the same” quantity tend to result in different values. To help you recognize different kinds of variability and choose a reasonable analysis procedure based on the kind of variability, you will learn about different *sources* of variability in this exercise.
*Reading*: [Conceptual Tools for Handling Uncertainty](https://drive.google.com/file/d/1FCvHiag25zqN6WdKoisaMUh35pSQU-0M/view?usp=sharing) (a draft chapter from a textbook I’m writing)
```
## Note: No need to edit this chunk!
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
52\.1 Variability
-----------------
As we’ve seen in this course, real data exhibit *variability*; that is, repeated measurements of “the same” quantity result in different values. Variability can arise for a variety of reasons, and different kinds of variability should be analyzed in different ways. To help make this determination, we’re going to study a theoretical framework for variability.
52\.2 The Cause\-Source Quadrants
---------------------------------
As described in the reading, the *cause\-source quadrants* organize variability into four distinct categories. Today, we’re going to focus on the *source* axis, and limit our attention to *chance causes*.
*(Figure: the cause\-source variability quadrants)*
* *Cause* is an idea from statistical quality control (manufacturing); a *chance cause* is modeled as random, while an *assignable cause* is thought to be traceable and preventable.
* *Source* is an idea from statistics education theory; this concept is explained further below.
52\.3 Real vs Induced Source
----------------------------
The idea of *source* can only be understood in the distinction between a *scopus* and a *measurement*: The *scopus* is the quantity that we are seeking to study, while the *measurement* is a possibly\-corrupted version of our scopus. The key insight is that **variability can occur both in the scopus value, and in the measurement**.
*(Figure: variability quadrants)*
As a simple example: based on our current understanding of physics, the speed of light `c` is a [constant value](https://en.wikipedia.org/wiki/Speed_of_light). Therefore, any variability we see in measurements of `c` is understood to be *induced variability*; real variability in `c` is not considered to be possible.
Conversely, our current understanding of physics is that quantum phenomena are [fundamentally unpredictable](https://en.wikipedia.org/wiki/Quantum_mechanics), and can only be described in a statistical sense. This means that quantum phenomena exhibit real variability.
Other physical quantities exhibit both real and induced variability. Since the concept of *source* relies on a choice of scopus, the only way we can make progress with this concept is to consider a specific scenario in detail.
52\.4 Manufacturing structural steel components
-----------------------------------------------
*The Context*: A manufacturer is producing cast steel parts for a landing gear. The part in question takes a heavy load, and if it fails it will disable the aircraft on the ground. These parts will be manufactured in bulk; approximately 500 will be made and installed in commercial aircraft that will operate for decades.
*The Scopus*: The strength of each steel component—as\-manufactured—will ultimately determine whether each aircraft is safe. As we learned in `c08-structures`, a structure is safe if its applied stress is less than its strength. Therefore, a smaller material strength is a more conservative value for design purposes.
52\.5 Scenarios
---------------
### 52\.5\.1 **q1** Imagine the manufacturer selects one part and performs multiple non\-destructive tensile tests on that single part, under similar conditions. The measured elasticity from each test is slightly different. Is this variability real or induced?
* Induced
* The properties of the component are essentially set at manufacturing time; if multiple measurements on the same part return different values, then the variability is most likely induced by the measurement process.
### 52\.5\.2 **q2** Imagine the manufacturer selects multiple parts and—for each part—performs multiple non\-destructive tensile tests, all under similar conditions. The measured elasticity values for each part are averaged to provide a more reliable estimate for each part. Upon comparing the parts, each averaged value is fairly different. Is this variability real or induced?
* Real
* The properties of the component are essentially set at manufacturing time; but no manufacturing process can create items with identical properties. Particularly if variability remains after induced variability has been controlled and eliminated (as described in the prompt), then the remaining variability is real.
### 52\.5\.3 **q3** Now the manufacturer selects multiple parts and performs a destructive tensile test to characterize the strength of each part, with tests carried out under similar conditions. The measured strength values exhibit a fair amount of variability. Is this variability real or induced?
* Without more information, it is impossible to say. It is likely a combination of real and induced sources.
* Real variability can arise from the manufacturing process, and induced variability can arise from the measurement. Since the measurement is destructive, we cannot use multiple measurements to control the induced variability.
* Note that it would generally be conservative to treat all of the variability in a strength as real; this would lead to parts that are heavier but safer than they need to be.
52\.6 Analyzing Data
--------------------
The following code generates data with both *noise* and *deviation*:
```
set.seed(101)
df_meas <-
map_dfr(
1:30,
function(i) {
Y_deviation <- rlnorm(n = 1, meanlog = 2)
Y_noise <- rnorm(n = 5, sd = 1)
tibble(Y = Y_deviation + Y_noise) %>%
mutate(id_sample = i, id_meas = row_number())
}
)
```
* `id_sample` \- represents an individual part
* `id_meas` \- represents an individual measurement, with multiple carried out on each part
* `Y` \- an individual measurement, identified by `id_sample` and `id_meas`
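A quick peek at the first few rows (a sketch, not part of the exercise) shows this nested structure, with several measurements per part:
```
## Sketch (not part of the exercise): inspect the nested structure of df_meas
df_meas %>%
  arrange(id_sample, id_meas) %>%
  head(10)
```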
If we make a simple histogram, we can see that the measured value `Y` is highly variable:
```
df_meas %>%
ggplot(aes(Y)) +
geom_histogram(bins = 30)
```
However, these data exhibit multiple *sources* of variability. The following questions will help you learn how to analyze data in light of this mixed variability.
### 52\.6\.1 **q4** Inspect the following graph. Answer the questions under *observations* below.
```
## NOTE: No need to edit; run and inspect
df_meas %>%
ggplot(aes(id_sample, Y)) +
geom_point(
data = . %>%
group_by(id_sample) %>%
summarize(Y = mean(Y)),
color = "red",
size = 1
) +
geom_point(size = 0.2) +
theme_minimal()
```
*Observations*
* Based on the visual, the variability due to deviation is obviously much larger than the variability due to noise: There is considerably more scatter between the red dots (each sample’s measurement mean) than there is around each red dot.
We can make this quantitative with an *analysis of variance*:
```
fit_meas <-
df_meas %>%
lm(formula = Y ~ id_sample + id_meas)
anova(fit_meas)
```
```
## Analysis of Variance Table
##
## Response: Y
## Df Sum Sq Mean Sq F value Pr(>F)
## id_sample 1 26.0 26.040 0.1609 0.6889
## id_meas 1 0.9 0.869 0.0054 0.9417
## Residuals 147 23792.3 161.852
```
A [random effects model](https://en.wikipedia.org/wiki/Random_effects_model) would be more appropriate for these data, but the fixed\-effects ANOVA above gives a rough quantitative comparison.
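For a rough by\-hand decomposition (a sketch, not part of the exercise), we can compare the spread of the per\-part means against the typical within\-part spread:
```
## Sketch (not part of the exercise): rough decomposition of the two sources
df_meas %>%
  group_by(id_sample) %>%
  summarize(Y_bar = mean(Y), s_within = sd(Y)) %>%
  summarize(
    sd_between = sd(Y_bar),    # dominated by part-to-part deviation
    sd_within = mean(s_within) # measurement noise; should be near 1
  )
```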
### 52\.6\.2 **q5** Imagine `Y` represents the measured strength of the cast steel. Would it be safe to simply average all of the values and use that as the material strength for design?
* No, it would be foolish to take the average of all these strength measurements and use that value for design. In fact, in aircraft design, this approach would be *illegal* according to [Federal Airworthiness Regulations](https://www.law.cornell.edu/cfr/text/14/25.613).
### 52\.6\.3 **q6** Compute the `0.1` quantile of the `Y` measurements. Would this be a conservative value to use as a material strength for design?
```
## TODO: Compute the 0.1 quantile of the `Y` values; complete the code below
# For comparison, here's the mean of the data
Y_mean <-
df_meas %>%
summarize(Y_mean = mean(Y)) %>%
pull(Y_mean)
Y_lo <-
df_meas %>%
summarize(Y_lo = quantile(Y, p = 0.1)) %>%
pull(Y_lo)
# Compare the values
Y_mean
```
```
## [1] 12.30886
```
```
Y_lo
```
```
## 10%
## 2.011725
```
Use the following to check your work.
```
## NO NEED TO EDIT; use this to check your work
assertthat::assert_that(abs(as.numeric(Y_lo) - 2.0117) < 1e-3)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
*Observations*
* `Y_lo` is considerably smaller than `Y_mean`.
* Yes, the 10% quantile would be a conservative material strength value. This would still not be in compliance with [FAA regulations](https://arc.aiaa.org/doi/full/10.2514/1.J059578) for material properties, but it is certainly better than using the (sample) mean.
### 52\.6\.4 **q7** The following code reduces the variability due to noise before computing the quantile. Run the code below, and answer the questions under *Observations* below.
```
## NOTE: No need to edit; run and answer the questions below
Y_lo_improved <-
df_meas %>%
## Take average within each sample's measurements
group_by(id_sample) %>%
summarize(Y_sample = mean(Y)) %>%
## Take quantile over all the samples
summarize(Y_lo = quantile(Y_sample, p = 0.1)) %>%
pull(Y_lo)
Y_lo_improved
```
```
## 10%
## 2.566182
```
```
Y_lo
```
```
## 10%
## 2.011725
```
*Observations*
* The new value `Y_lo_improved` is less conservative (it is larger), but it makes more efficient use of the data: Averaging within each part reduces the effects of noise, which focuses the quantile on the design\-relevant deviation in the material property of interest.
* To compute `Y_lo_improved`, we needed multiple observations `id_meas` on each sample `id_sample`.
* In reality, we cannot collect a dataset like `df_meas` for strength properties; this is because we can’t collect more than one strength measurement per sample!
* It is possible to collect repeated observations with any non\-destructive measurement. For material properties, this could include the elasticity, Poisson ratio, density, etc.
*Aside*: This kind of statistical experimental design is sometimes called a [nested design](https://online.stat.psu.edu/stat503/lesson/14/14.1).
53 Vis: Improving Graphs
========================
*Purpose*: Creating a *presentation\-quality* graph is an *iterative exercise*. There are many different ways to show the same data, some of which are more effective for communication than others. Let’s return to the ideas from “How Humans See Data” and use them to improve upon some graphs: This will give you practice iterating on visuals.
*Reading*: [How Humans See Data](https://www.youtube.com/watch?v=fSgEeI2Xpdc&list=PLluqivwOH1ouKkbM0c6x-g7DQnXF0UmC0&index=37&t=0s) (Video from prior exercise, for reference)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
53\.1 Improve these graphs!
---------------------------
Using the ideas from the reading (video), state some issues with the following graphs. Remember the *visual hierarchy*:
1. Position along a common scale
2. Position on identical but nonaligned scales
3. Length
4. Angle; Slope (With slope not too close to 0, \\(\\pi/2\\), or \\(\\pi\\).)
5. Area
6. Volume; Density; Color saturation
7. Color hue
### 53\.1\.1 **q1** Use concepts from the reading to improve the following graph. *Make sure your graph shows all the same variables*, no more and no fewer.
```
## NOTE: No need to edit; run and inspect
mpg %>%
ggplot(aes(manufacturer, cty)) +
geom_boxplot() +
coord_flip()
```
Create your improved graph here
```
## TODO: Create an improved version of the graph above
## NOTE: This is just one possibility
mpg %>%
ggplot(aes(fct_reorder(manufacturer, cty), cty)) +
geom_boxplot() +
coord_flip()
```
### 53\.1\.2 **q2** Use concepts from the reading to improve the following graph. *Make sure your graph shows all the same variables*, no more and no fewer.
```
## NOTE: No need to edit; run and inspect
as_tibble(mtcars) %>%
mutate(model = rownames(mtcars)) %>%
ggplot(aes(x = "", y = "", size = mpg)) +
geom_point() +
facet_wrap(~model)
```
Create your improved graph here
```
## TODO: Create an improved version of the graph above
## NOTE: This is just one possibility
as_tibble(mtcars) %>%
mutate(
model = rownames(mtcars),
model = fct_reorder(model, mpg)
) %>%
ggplot(aes(x = model, y = mpg)) +
geom_col() +
coord_flip()
```
### 53\.1\.3 **q3** Use concepts from the reading to improve the following graph. *Make sure your graph shows all the same variables*, no more and no fewer.
```
## NOTE: No need to edit; run and inspect
diamonds %>%
ggplot(aes(clarity, fill = cut)) +
geom_bar()
```
Create your improved graph here
```
## TODO: Create an improved version of the graph above
## NOTE: This is just one possibility
diamonds %>%
count(cut, clarity) %>%
ggplot(aes(clarity, n, color = cut, group = cut)) +
geom_line()
```
### 53\.1\.4 **q4** Use concepts from the reading to improve the following graph. *Make sure your graph shows all the same variables*, no more and no fewer.
```
## NOTE: No need to edit; run and inspect
diamonds %>%
ggplot(aes(x = "", fill = cut)) +
geom_bar() +
coord_polar("y") +
labs(x = "")
```
Create your improved graph here
```
## TODO: Create an improved version of the graph above
## NOTE: This is just one possibility
diamonds %>%
ggplot(aes(cut)) +
geom_bar()
```
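If the share of the whole is the main message of the original pie chart, another option (a sketch, not the only acceptable answer) is to show each cut’s proportion on a common scale:
```
## Sketch (not the only acceptable answer): proportions on a common scale
diamonds %>%
  count(cut) %>%
  mutate(prop = n / sum(n)) %>%
  ggplot(aes(cut, prop)) +
  geom_col()
```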
54 Model: Introduction to Modeling
==================================
*Purpose*: Modeling is a key tool for data science: We will use models to understand the relationships between variables and to make predictions. **Building models is subtle and difficult**. To that end, this will be a high\-level tour through the key parts of building and assessing a model. In this exercise, you’ll learn what a model is, how to *fit* a model, how to *assess* a fitted model, some ways to *improve* a model, and how to *quantify* how trustworthy a model is.
*Reading*: (*None*, this exercise *is* the reading.)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(modelr)
library(broom)
```
```
##
## Attaching package: 'broom'
```
```
## The following object is masked from 'package:modelr':
##
## bootstrap
```
```
## NOTE: No need to edit this chunk
set.seed(101)
## Select training data
df_train <-
diamonds %>%
slice(1:1e4)
## Select test data
df_test <-
diamonds %>%
slice((1e4 + 1):2e4)
```
54\.1 A simple model
--------------------
In what follows, we’ll try to fit a *linear, one\-dimensional* model for the `price` of a diamond. It will be *linear* in the sense that it will be a linear function of its inputs; i.e. for an input \\(x\\) we’ll limit ourselves to scaling the input \\(\\beta \\times x\\). It will also be *one\-dimensional* in the sense that we will only consider one input; namely, the `carat` of the diamond. Thus, my model for predicted price \\(\\hat{f}\\) will be
\\\[\\hat{f} \= \\beta\_0 \+ \\beta\_{\\text{carat}} (\\text{carat}) \+ \\epsilon,\\]
where \\(\\epsilon\\) is an *additive error* term, which we’ll model as a random variable. Remember that \\(\\hat{f}\\) notation indicates an estimate for the quantity \\(f\\). To start modeling, I’ll choose *parameters* for my model by selecting values for the slope and intercept.
```
## Set model parameter values [theta]
slope <- 1000 / 0.5 # Eyeball: $1000 / (1/2) carat
intercept <- 0
## Represent model as an `abline`
df_train %>%
ggplot(aes(carat, price)) +
geom_point() +
geom_abline(
slope = slope,
intercept = intercept,
linetype = 2,
color = "salmon"
)
```
That doesn’t look very good; the line tends to miss the higher\-carat values. I manually adjust the slope up by a factor of two:
```
## Set model parameter values [theta]
slope <- 2000 / 0.5 # Adjusted by factor of 2
intercept <- 0
## Represent model as an `abline`
df_train %>%
ggplot(aes(carat, price)) +
geom_point() +
geom_abline(
slope = slope,
intercept = intercept,
linetype = 2,
color = "salmon"
)
```
This *manual* approach to *fitting a model*—choosing parameter values—is labor\-intensive and silly. Fortunately, there’s a better way. We can *optimize* the parameter values by minimizing a chosen metric.
First, let’s visualize the quantities we will seek to minimize:
```
## Set model parameter values [theta]
slope <- 2000 / 0.5
intercept <- 0
## Compute predicted values
df_train %>%
mutate(price_pred = slope * carat + intercept) %>%
## Visualize *residuals* as vertical bars
ggplot(aes(carat, price)) +
geom_point() +
geom_segment(
aes(xend = carat, yend = price_pred),
color = "salmon"
) +
geom_line(
aes(y = price_pred),
linetype = 2,
color = "salmon"
)
```
This plot shows the *residuals* of the model, that is
\\\[\\text{Residual}\_i(\\theta) \= \\hat{f}\_i(\\theta) \- f\_i,\\]
where \\(f\_i\\) is the i\-th observed output value (`price`), \\(\\hat{f}\_i(\\theta)\\) is the i\-th prediction from the model (`price_pred`), and \\(\\theta\\) is the set of parameter values for the model. For instance, the *linear, one\-dimensional* model above has as parameters `theta = c(slope, intercept)`. We can use these residuals to define an error metric and fit a model.
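As a quick check on this definition (an optional sketch, not part of the exercise), we can compute the residuals and their mean square for the eyeballed parameters above by hand:
```
## NOTE: Optional sketch; residuals for the eyeballed model, following the definition above
df_train %>%
  mutate(
    price_pred = slope * carat + intercept, # Model prediction
    residual = price_pred - price           # Residual, as defined above
  ) %>%
  summarize(mean_sq_residual = mean(residual^2)) # Average squared residual
```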
54\.2 Fitting a model
---------------------
Define the *mean squared error* (MSE) via
\\\[\\text{MSE}(\\theta) \= \\frac{1}{n} \\sum\_{i\=1}^n \\text{Residual}\_i(\\theta)^2 \= \\frac{1}{n} \\sum\_{i\=1}^n (\\hat{f}\_i(\\theta) \- f\_i)^2\.\\]
This is a summary of the total error of the model. *Fitting* the model means choosing the parameter values \\(\\theta\\) that minimize this error. While we could carry out this optimization by hand, the `R` routine `lm()` (which stands for *linear model*) automates this procedure. We simply give it `data` over which to fit the model, and a `formula` defining which inputs and output to consider.
```
## Fit model
fit_carat <-
df_train %>%
lm(
data = ., # Data for fit
formula = price ~ carat # Formula for fit
)
fit_carat %>% tidy()
```
```
## # A tibble: 2 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) -84.6 21.0 -4.04 0.0000546
## 2 carat 4129. 23.9 173. 0
```
The `tidy()` function takes a fit and returns the model’s parameters; here we can see the `estimate` values for the coefficients, as well as some statistical information (which we’ll discuss in a future exercise). The `formula` argument uses R’s formula notation, where `Y ~ X` means “fit a linear model with `Y` as the value to predict, and with `X` as an input.” The formula `price ~ carat` translates to the linear model
\\\[\\widehat{\\text{price}} \= \\beta\_0 \+ \\beta\_{\\text{carat}} (\\text{carat}) \+ \\epsilon.\\]
This slightly\-mysterious formula notation `price ~ carat` is convenient for defining many kinds of models, as we’ll see in the following task.
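For instance (hypothetical illustrations only; these fits are not used elsewhere in this exercise), the same notation extends to multiple inputs and to transformed variables:
```
## NOTE: Hypothetical examples of formula notation; not used below
fit_two_inputs <- lm(data = df_train, formula = price ~ carat + depth)  # Two inputs
fit_log <- lm(data = df_train, formula = log(price) ~ log(carat))       # Transformed output and input
```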
### 54\.2\.1 **q1** Fit a basic model.
Copy the code above to fit a model of the form
\\\[\\widehat{\\text{price}} \= \\beta\_0 \+ \\beta\_{\\text{carat}} (\\text{carat}) \+ \\beta\_{\\text{cut}} (\\text{cut}) \+ \\epsilon.\\]
Answer the questions below to investigate how this model form handles the variable `cut`.
```
fit_q1 <-
df_train %>%
lm(
data = .,
formula = price ~ carat + cut
)
fit_q1 %>% tidy()
```
```
## # A tibble: 6 × 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) -294. 22.1 -13.3 3.39e-40
## 2 carat 4280. 24.0 178. 0
## 3 cut.L 349. 17.7 19.7 4.10e-85
## 4 cut.Q -135. 15.8 -8.56 1.26e-17
## 5 cut.C 208. 14.3 14.5 2.94e-47
## 6 cut^4 82.2 12.1 6.78 1.26e-11
```
**Observations**:
* `carat` is a continuous variable, while `cut` is an (ordinal) factor; it only takes fixed non\-numerical values.
* We can’t reasonably multiply `cut` by a constant as it is not a number.
* The `term` for `carat` is just one numerical value (a slope), while there are multiple `term`s for `cut`.
+ These are `lm()`’s way of *encoding* the `cut` factor as a set of numerical values: Note that there are `5` levels for `cut` and `4` terms representing `cut`.
*Aside*: *Factors* are handled automatically by `lm()`, which introduces [*dummy variables*](https://en.wikipedia.org/wiki/Dummy_variable_(statistics)). Conceptually, what the linear model does is fit a single *constant value* for each level of the factor. This gives us a different prediction for each factor level, as the next example shows.
```
## NOTE: No need to edit; just run and inspect
fit_cut <-
lm(
data = df_train,
formula = price ~ cut
)
df_train %>%
add_predictions(fit_cut, var = "price_pred") %>%
ggplot(aes(cut)) +
geom_errorbar(aes(ymin = price_pred, ymax = price_pred), color = "salmon") +
geom_point(aes(y = price))
```
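If you’re curious how `lm()` encodes the factor internally (an optional aside), you can inspect the *model matrix*; for an ordered factor like `cut` you should see contrast columns matching the `cut.L, cut.Q, ...` terms reported by `tidy()` above:
```
## NOTE: Optional aside; inspect how lm() encodes the factor `cut`
model.matrix(price ~ cut, data = df_train %>% slice(1:5))
```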
54\.3 Assessing a model
-----------------------
Next, let’s visually inspect the results of model `fit_carat` using the function `modelr::add_predictions()`:
```
## Compute predicted values
df_train %>%
add_predictions(
model = fit_carat,
var = "price_pred"
) %>%
ggplot(aes(carat, price)) +
geom_point() +
geom_line(
aes(y = price_pred),
linetype = 2,
color = "salmon"
)
```
Frankly, these model predictions don’t look very good! We know that diamond prices probably depend on the “4 C’s”; maybe your model using more predictors will be more effective?
### 54\.3\.1 **q2** Repeat the code above from chunk `vis-carat` to produce a similar visual with your model `fit_q1`. *This visual is unlikely to be effective*, note in your observations why that might be.
```
df_train %>%
add_predictions(
model = fit_q1,
var = "price_pred"
) %>%
ggplot(aes(carat, price)) +
geom_point() +
geom_line(
aes(y = price_pred),
linetype = 2,
color = "salmon"
)
```
**Observations**:
* A diamond can have a different value of `cut` at a fixed value of `carat`; this means our model can take different values at a fixed value of `carat`. This leads to the “squiggles” we see above.
Visualizing the results against a single variable quickly breaks down when we have more than one predictor! Let’s learn some other tools for assessing model accuracy.
54\.4 Model Diagnostics
-----------------------
The plot above allows us to visually assess the model performance, but sometimes we’ll want a quick *numerical summary* of model accuracy, particularly when comparing multiple models for the same data. The functions `modelr::mse` and `modelr::rsquare` are two *error metrics* we can use to summarize accuracy:
```
## Compute metrics
mse(fit_carat, df_train)
```
```
## [1] 309104.4
```
```
rsquare(fit_carat, df_train)
```
```
## [1] 0.7491572
```
* `mse` is the [*mean squared error*](https://en.wikipedia.org/wiki/Mean_squared_error). Lower values are more accurate.
+ The `mse` has no formal upper bound, so we can only compare `mse` values between models.
+ The `mse` also has the square\-units of the quantity we’re predicting; for instance our model’s `mse` has units of \\(\\$^2\\).
* `rsquare`, also known as the [*coefficient of determination*](https://en.wikipedia.org/wiki/Coefficient_of_determination), lies between `[0, 1]`. Higher values are more accurate.
+ The `rsquare` has bounded values, so we can think about it in absolute terms: a model with `rsquare == 1` is essentially perfect, and values closer to `1` are better.
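To connect these metrics back to the residuals, here is a minimal sketch reproducing them by hand (`modelr` computes `rsquare` from prediction and response variances, but for a least\-squares fit evaluated on its own training data the two formulas should agree):
```
## NOTE: Optional check; reproduce the metrics by hand from the predictions
df_train %>%
  add_predictions(fit_carat, var = "price_pred") %>%
  summarize(
    mse_manual = mean((price_pred - price)^2),
    rsquare_manual = 1 - sum((price - price_pred)^2) / sum((price - mean(price))^2)
  )
```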
### 54\.4\.1 **q3** Compute the `mse` and `rsquare` for your model `fit_q1`, and compare the values against those for `fit_carat`. Is your model more accurate?
```
mse(fit_q1, df_train)
```
```
## [1] 290675.2
```
```
rsquare(fit_q1, df_train)
```
```
## [1] 0.7641128
```
**Observations**:
* The model `fit_q1` is slightly more accurate than `fit_carat`, at least on `df_train`
*Aside*: What’s an acceptable r\-squared value? That really depends on the application. For some physics\-related problems \\(R^2 \\approx 0\.9\\) might be considered unacceptably low, while for some human\-behavior related problems \\(R^2 \\approx 0\.7\\) might be considered quite good!
While it’s difficult to visualize model results against *multiple variables*, we can always compare *predicted vs actual* values. If the model fit were perfect, then the predicted \\(\\hat{f}\\) and actual \\(f\\) values would lie along a straight line with slope one.
```
## NOTE: No need to change this
## predicted vs actual
df_train %>%
add_predictions(
model = fit_carat,
var = "price_pred"
) %>%
ggplot(aes(price, price_pred)) +
geom_abline(slope = 1, intercept = 0, color = "grey50", size = 2) +
geom_point()
```
```
## Warning: Using `size` aesthetic for lines was deprecated in ggplot2 3.4.0.
## ℹ Please use `linewidth` instead.
```
This fit looks quite poor—there is a great deal of scatter of actual values away from the predicted values. What’s more, the scatter doesn’t look random; there seem to be some consistent patterns (e.g. “stripes”) in the plot that suggest there may be additional patterns we could incorporate in our model, if we added more variables. Let’s try that!
54\.5 Improving a model
-----------------------
The plot above suggests there may be some patterns we’re not accounting for in our model: Let’s build another model to use that intuition.
### 54\.5\.1 **q4** Fit an updated model.
Fit a model `fit_4c` of the form:
\\\[\\widehat{\\text{price}} \= \\beta\_0 \+ \\beta\_{\\text{carat}} (\\text{carat}) \+ \\beta\_{\\text{cut}} (\\text{cut}) \+ \\beta\_{\\text{color}} (\\text{color}) \+ \\beta\_{\\text{clarity}} (\\text{clarity}) \+ \\epsilon.\\]
Compute the `mse` and `rsquare` of your new model, and compare against the previous models.
```
fit_4c <-
df_train %>%
lm(
data = .,
formula = price ~ carat + cut + color + clarity
)
rsquare(fit_q1, df_train)
```
```
## [1] 0.7641128
```
```
rsquare(fit_4c, df_train)
```
```
## [1] 0.897396
```
**Observations**:
* I find that adding all four C’s improves the model accuracy quite a bit.
Generally, *adding more variables tends to improve model accuracy*. However, *this is not always the case*. In a future exercise we’ll learn more about selecting meaningful variables in the context of modeling.
Note that even when we use all 4 C’s, we still do not have a *perfect* model. Generally, any model we fit will have some inaccuracy. If we plan to use our model to make decisions, it’s important to have some sense of **how much we can trust our predictions**. Metrics like model error are a coarse description of general model accuracy, but we can get much more useful information *for individual predictions* by **quantifying uncertainty**.
54\.6 Quantifying uncertainty
-----------------------------
We’ve talked about **confidence intervals** before for estimates like the sample mean. Let’s take a (brief) look now at *prediction intervals* (PI). The code below approximates prediction intervals based on the `fit_carat` model.
```
## NOTE: No need to edit this chunk
## Helper function to compute uncertainty bounds
add_uncertainties <- function(data, model, prefix = "pred", ...) {
  df_fit <-
    stats::predict(model, data, ...) %>% # Predict; extra arguments (e.g. interval, level) pass through
    as_tibble() %>% # Convert the prediction matrix to a tibble (columns fit, lwr, upr when an interval is requested)
    rename_with(~ str_c(prefix, "_", .)) # Add the prefix, e.g. fit -> pred_fit
  bind_cols(data, df_fit) # Attach the prediction columns to the original data
}
## Generate predictions with uncertainties
df_pred_uq <-
df_train %>%
add_uncertainties(
model = fit_carat,
prefix = "pred",
interval = "prediction",
level = 0.95
)
df_pred_uq %>% glimpse()
```
```
## Rows: 10,000
## Columns: 13
## $ carat <dbl> 0.23, 0.21, 0.23, 0.29, 0.31, 0.24, 0.24, 0.26, 0.22, 0.23, 0…
## $ cut <ord> Ideal, Premium, Good, Premium, Good, Very Good, Very Good, Ve…
## $ color <ord> E, E, E, I, J, J, I, H, E, H, J, J, F, J, E, E, I, J, J, J, I…
## $ clarity <ord> SI2, SI1, VS1, VS2, SI2, VVS2, VVS1, SI1, VS2, VS1, SI1, VS1,…
## $ depth <dbl> 61.5, 59.8, 56.9, 62.4, 63.3, 62.8, 62.3, 61.9, 65.1, 59.4, 6…
## $ table <dbl> 55, 61, 65, 58, 58, 57, 57, 55, 61, 61, 55, 56, 61, 54, 62, 5…
## $ price <int> 326, 326, 327, 334, 335, 336, 336, 337, 337, 338, 339, 340, 3…
## $ x <dbl> 3.95, 3.89, 4.05, 4.20, 4.34, 3.94, 3.95, 4.07, 3.87, 4.00, 4…
## $ y <dbl> 3.98, 3.84, 4.07, 4.23, 4.35, 3.96, 3.98, 4.11, 3.78, 4.05, 4…
## $ z <dbl> 2.43, 2.31, 2.31, 2.63, 2.75, 2.48, 2.47, 2.53, 2.49, 2.39, 2…
## $ pred_fit <dbl> 865.1024, 782.5204, 865.1024, 1112.8483, 1195.4303, 906.3934,…
## $ pred_lwr <dbl> -225.25853, -307.86569, -225.25853, 22.55812, 105.16206, -183…
## $ pred_upr <dbl> 1955.463, 1872.906, 1955.463, 2203.139, 2285.699, 1996.742, 1…
```
The helper function `add_uncertainties()` added the columns `pred_fit` (the predicted price), as well as two new columns: `pred_lwr` and `pred_upr`. These are the bounds of a *prediction interval* (PI), an interval meant to capture *not* a future sample statistic, *but rather* a future observation.
The following visualization illustrates the computed prediction intervals using `geom_errorbar()`. Note that we get a PI *for each observation*; every dot gets an interval.
Since we have access to the true values `price`, we can assess whether the true observed values fall within the model prediction intervals; this happens when the diagonal falls within the interval on a predicted\-vs\-actual plot.
```
## NOTE: No need to edit this chunk
# Visualize
df_pred_uq %>%
filter(price < 1000) %>%
ggplot(aes(price)) +
geom_abline(slope = 1, intercept = 0, size = 2, color = "grey50") +
geom_errorbar(
data = . %>% filter(pred_lwr <= price & price <= pred_upr),
aes(ymin = pred_lwr, ymax = pred_upr),
width = 0,
size = 0.5,
alpha = 1 / 2,
color = "darkturquoise"
) +
geom_errorbar(
data = . %>% filter(price < pred_lwr | pred_upr < price),
aes(ymin = pred_lwr, ymax = pred_upr),
width = 0,
size = 0.5,
color = "salmon"
) +
geom_point(aes(y = pred_fit), size = 0.1) +
theme_minimal()
```
Ideally these prediction intervals should include a desired fraction of observed values; let’s compute the *empirical coverage* to see if this matches our desired `level = 0.95`.
```
## NOTE: No need to edit this chunk
# Compute empirical coverage
df_pred_uq %>%
summarize(coverage = mean(pred_lwr <= price & price <= pred_upr))
```
```
## # A tibble: 1 × 1
## coverage
## <dbl>
## 1 0.959
```
The empirical coverage is quite close to our desired level.
### 54\.6\.1 **q6** Use the helper function `add_uncertainties()` to add prediction intervals to `df_train` based on the model `fit_4c`.
```
df_q6 <-
df_train %>%
add_uncertainties(
model = fit_4c,
prefix = "pred",
interval = "prediction",
level = 0.95
)
## NOTE: No need to edit code below
# Compute empirical coverage
df_q6 %>%
summarize(coverage = mean(pred_lwr <= price & price <= pred_upr))
```
```
## # A tibble: 1 × 1
## coverage
## <dbl>
## 1 0.956
```
```
# Visualize
df_q6 %>%
ggplot(aes(price)) +
geom_abline(slope = 1, intercept = 0, size = 2, color = "grey50") +
geom_errorbar(
data = . %>% filter(pred_lwr <= price & price <= pred_upr),
aes(ymin = pred_lwr, ymax = pred_upr),
width = 0,
size = 0.1,
alpha = 1 / 5,
color = "darkturquoise"
) +
geom_errorbar(
data = . %>% filter(price < pred_lwr | pred_upr < price),
aes(ymin = pred_lwr, ymax = pred_upr),
width = 0,
size = 0.1,
color = "salmon"
) +
geom_point(aes(y = pred_fit), size = 0.1) +
theme_minimal()
```
We will discuss prediction intervals further in a future exercise. For now, know that they give us a sense of how much we should trust our model predictions.
54\.7 Summary
-------------
To summarize this reading, here are the steps to fitting and using a model:
* *Choose* a model form, e.g. if only considering *linear models*, we may consider `price ~ carat` vs `price ~ carat + cut`.
* *Fit* the model with data; this is done by optimizing a user\-chosen metric, such as the `mse`.
* *Assess* the model with metrics (`mse, rsquare`) and plots (predicted\-vs\-actual).
* *Improve* the model if needed, e.g. by adding more predictors.
* *Quantify* the trustworthiness of the model, e.g. with prediction intervals.
* *Use* the model to do useful work! We’ll cover this in future exercises.
54\.8 Preview
-------------
Notice that in this exercise, we only used `df_train`, but I *also* defined a tibble `df_test`. What happens when we fit the model on `df_train`, but assess it on `df_test`?
```
## NOTE: No need to edit this chunk
rsquare(fit_4c, df_train)
```
```
## [1] 0.897396
```
```
rsquare(fit_4c, df_test)
```
```
## [1] 0.7860757
```
Note that `rsquare` on the *training data* `df_train` is much higher than `rsquare` on the *test data* `df_test`. This indicates that assessing the model on `df_train` gives an *overly optimistic* picture of its accuracy. We will explore this idea more in `e-stat12-models-train-test`.
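As one more optional check (a minimal sketch), the same train\-versus\-test gap shows up if we compare `mse` values:
```
## NOTE: Optional; compare mse on training vs test data
mse(fit_4c, df_train)
mse(fit_4c, df_test)
```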
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/data-cleaning.html |
55 Data: Cleaning
=================
*Purpose*: Most of the data you’ll find in the wild is *messy*; you’ll need to clean those data before you can do useful work. In this case study, you’ll learn some more tricks for cleaning data. We’ll use these data for a future exercise on modeling, so we’ll build on the work you do here today.
*Reading*: (*None*, this exercise *is* the reading.)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
*Background*: This exercise’s data comes from the UCI Machine Learning Database; specifically their [Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). These data consist of clinical measurements on patients, and are intended to help predict heart disease.
```
## NOTE: No need to edit; run and inspect
url_disease <- "http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data"
filename_disease <- "./data/uci_heart_disease.csv"
## Download the data locally
curl::curl_download(
url_disease,
destfile = filename_disease
)
```
This is a *messy* dataset; one we’ll have to clean if we want to make sense of it. Let’s load the data and document the ways in which it’s messy:
```
## NOTE: No need to edit; run and inspect
read_csv(filename_disease) %>% glimpse()
```
```
## New names:
## Rows: 302 Columns: 14
## ── Column specification
## ──────────────────────────────────────────────────────── Delimiter: "," chr
## (2): 0.0...12, 6.0 dbl (12): 63.0, 1.0...2, 1.0...3, 145.0, 233.0, 1.0...6,
## 2.0, 150.0, 0.0...9...
## ℹ Use `spec()` to retrieve the full column specification for this data. ℹ
## Specify the column types or set `show_col_types = FALSE` to quiet this message.
## • `1.0` -> `1.0...2`
## • `1.0` -> `1.0...3`
## • `1.0` -> `1.0...6`
## • `0.0` -> `0.0...9`
## • `0.0` -> `0.0...12`
```
```
## Rows: 302
## Columns: 14
## $ `63.0` <dbl> 67, 67, 37, 41, 56, 62, 57, 63, 53, 57, 56, 56, 44, 52, 57,…
## $ `1.0...2` <dbl> 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1,…
## $ `1.0...3` <dbl> 4, 4, 3, 2, 2, 4, 4, 4, 4, 4, 2, 3, 2, 3, 3, 2, 4, 3, 2, 1,…
## $ `145.0` <dbl> 160, 120, 130, 130, 120, 140, 120, 130, 140, 140, 140, 130,…
## $ `233.0` <dbl> 286, 229, 250, 204, 236, 268, 354, 254, 203, 192, 294, 256,…
## $ `1.0...6` <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0,…
## $ `2.0` <dbl> 2, 2, 0, 2, 0, 2, 0, 2, 2, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 2,…
## $ `150.0` <dbl> 108, 129, 187, 172, 178, 160, 163, 147, 155, 148, 153, 142,…
## $ `0.0...9` <dbl> 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1,…
## $ `2.3` <dbl> 1.5, 2.6, 3.5, 1.4, 0.8, 3.6, 0.6, 1.4, 3.1, 0.4, 1.3, 0.6,…
## $ `3.0` <dbl> 2, 2, 3, 1, 1, 3, 1, 2, 3, 2, 2, 2, 1, 1, 1, 3, 1, 1, 1, 2,…
## $ `0.0...12` <chr> "3.0", "2.0", "0.0", "0.0", "0.0", "2.0", "0.0", "1.0", "0.…
## $ `6.0` <chr> "3.0", "7.0", "3.0", "3.0", "3.0", "3.0", "3.0", "7.0", "7.…
## $ `0` <dbl> 2, 1, 0, 0, 0, 3, 0, 2, 1, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 0,…
```
*Observations*:
* The CSV comes without column names! `read_csv()` got confused and assigned the first row of data as names.
* Some of the numerical columns were incorrectly assigned `character` type.
* Some of the columns are coded as binary values `0, 1`, but they really represent variables like `sex %in% c("male", "female")`.
Let’s tackle these problems one at a time:
55\.1 Problem 1: No column names
--------------------------------
We’ll have a hard time making sense of these data without column names. Let’s fix that.
### 55\.1\.1 **q1** Obtain the data.
Following the [dataset documentation](https://archive.ics.uci.edu/ml/datasets/Heart+Disease), transcribe the correct column names and assign them as a character vector. You will use this to give the dataset sensible column names when you load it in q2\.
*Hint 1*: The relevant section from the dataset documentation is quoted here:
> Only 14 attributes used:
> 1\. \#3 (age)
> 2\. \#4 (sex)
> 3\. \#9 (cp)
> 4\. \#10 (trestbps)
> 5\. \#12 (chol)
> 6\. \#16 (fbs)
> 7\. \#19 (restecg)
> 8\. \#32 (thalach)
> 9\. \#38 (exang)
> 10\. \#40 (oldpeak)
> 11\. \#41 (slope)
> 12\. \#44 (ca)
> 13\. \#51 (thal)
> 14\. \#58 (num) (the predicted attribute)
*Hint 2*: A “copy\-paste\-edit” is probably the most effective approach here!
```
## TODO: Assign the column names to col_names; make sure they are strings
col_names <- c(
"age",
"sex",
"cp",
"trestbps",
"chol",
"fbs",
"restecg",
"thalach",
"exang",
"oldpeak",
"slope",
"ca",
"thal",
"num"
)
```
Use the following to check your code.
```
## NOTE: No need to change this
assertthat::assert_that(col_names[1] == "age")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[2] == "sex")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[3] == "cp")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[4] == "trestbps")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[5] == "chol")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[6] == "fbs")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[7] == "restecg")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[8] == "thalach")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[9] == "exang")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[10] == "oldpeak")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[11] == "slope")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[12] == "ca")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[13] == "thal")
```
```
## [1] TRUE
```
```
assertthat::assert_that(col_names[14] == "num")
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
55\.2 Problem 2: Incorrect types
--------------------------------
We saw above that `read_csv()` incorrectly guessed some of the column types. Let’s fix that by manually specifying each column’s type.
### 55\.2\.1 **q2** Call `read_csv()` with the `col_names` and `col_types` arguments. Use the column names you assigned above, and set all column types to `col_number()`.
*Hint*: Remember that you can always read the documentation to learn how to use a new argument!
```
## TODO: Use the col_names and col_types arguments to give the data the
## correct column names, and to set their types to col_number()
df_q2 <-
read_csv(
filename_disease,
col_names = col_names,
col_types = cols(
"age" = col_number(),
"sex" = col_number(),
"cp" = col_number(),
"trestbps" = col_number(),
"chol" = col_number(),
"fbs" = col_number(),
"restecg" = col_number(),
"thalach" = col_number(),
"exang" = col_number(),
"oldpeak" = col_number(),
"slope" = col_number(),
"ca" = col_number(),
"thal" = col_number(),
"num" = col_number()
)
)
```
```
## Warning: One or more parsing issues, call `problems()` on your data frame for details,
## e.g.:
## dat <- vroom(...)
## problems(dat)
```
```
df_q2 %>% glimpse()
```
```
## Rows: 303
## Columns: 14
## $ age <dbl> 63, 67, 67, 37, 41, 56, 62, 57, 63, 53, 57, 56, 56, 44, 52, 5…
## $ sex <dbl> 1, 1, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1, 0, 1, 1…
## $ cp <dbl> 1, 4, 4, 3, 2, 2, 4, 4, 4, 4, 4, 2, 3, 2, 3, 3, 2, 4, 3, 2, 1…
## $ trestbps <dbl> 145, 160, 120, 130, 130, 120, 140, 120, 130, 140, 140, 140, 1…
## $ chol <dbl> 233, 286, 229, 250, 204, 236, 268, 354, 254, 203, 192, 294, 2…
## $ fbs <dbl> 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0…
## $ restecg <dbl> 2, 2, 2, 0, 2, 0, 2, 0, 2, 2, 0, 2, 2, 0, 0, 0, 0, 0, 0, 0, 2…
## $ thalach <dbl> 150, 108, 129, 187, 172, 178, 160, 163, 147, 155, 148, 153, 1…
## $ exang <dbl> 0, 1, 1, 0, 0, 0, 0, 1, 0, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1…
## $ oldpeak <dbl> 2.3, 1.5, 2.6, 3.5, 1.4, 0.8, 3.6, 0.6, 1.4, 3.1, 0.4, 1.3, 0…
## $ slope <dbl> 3, 2, 2, 3, 1, 1, 3, 1, 2, 3, 2, 2, 2, 1, 1, 1, 3, 1, 1, 1, 2…
## $ ca <dbl> 0, 3, 2, 0, 0, 0, 2, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0…
## $ thal <dbl> 6, 3, 7, 3, 3, 3, 3, 3, 7, 7, 6, 3, 6, 7, 7, 3, 7, 3, 3, 3, 3…
## $ num <dbl> 0, 2, 1, 0, 0, 0, 3, 0, 2, 1, 0, 0, 2, 0, 0, 0, 1, 0, 0, 0, 0…
```
Use the following to check your code.
```
## NOTE: No need to change this
assertthat::assert_that(assertthat::are_equal(names(df_q2), col_names))
```
```
## [1] TRUE
```
```
assertthat::assert_that(all(map_chr(df_q2, class) == "numeric"))
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
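As an aside (an optional alternative, not required for the exercise), `readr` also lets you set a *default* column type instead of listing every column:
```
## NOTE: Optional alternative; set a fallback type for all columns
df_q2_alt <-
  read_csv(
    filename_disease,
    col_names = col_names,
    col_types = cols(.default = col_number())
  )
```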
55\.3 Problem 3: Uninformative values
-------------------------------------
The numeric codes given for some of the variables are uninformative; let’s replace those with more human\-readable values.
Rather than go and modify our raw data, we will instead *recode* the variables in our loaded dataset. *It is bad practice to modify your raw data!* Modifying your data in code provides *traceable documentation* for the edits you made; this is a key part of doing [reproducible science](https://www.nature.com/articles/s41562-016-0021). It takes more work, but *your results will be more trustworthy if you do things the right way!*
### 55\.3\.1 **q3** Create *conversion functions* to recode factor values as human\-readable strings. I have provided one function (`convert_sex`) as an example.
*Note*: “In the wild” you would be responsible for devising your own sensible level names. However, I’m going to provide specific codes so that I can write unit tests to check your answers:
| Variable | Levels |
| --- | --- |
| `sex` | `1 = "male", 0 = "female"` |
| `fbs` | `1 = TRUE, 0 = FALSE` |
| `restecg` | `0 = "normal", 1 = "ST-T wave abnormality", 2 = "Estes' criteria"` |
| `exang` | `1 = TRUE, 0 = FALSE` |
| `slope` | `1 = "upsloping", 2 = "flat", 3 = "downsloping"` |
| `thal` | `3 = "normal", 6 = "fixed defect", 7 = "reversible defect"` |
```
## NOTE: This is an example conversion
convert_sex <- function(x) {
case_when(
x == 1 ~ "male",
x == 0 ~ "female",
TRUE ~ NA_character_
)
}
convert_cp <- function(x) {
case_when(
x == 1 ~ "typical angina",
x == 2 ~ "atypical angina",
x == 3 ~ "non-anginal pain",
x == 4 ~ "asymptomatic",
TRUE ~ NA_character_
)
}
convert_fbs <- function(x) {
if_else(x == 1, TRUE, FALSE)
}
convert_restecv <- function(x) {
case_when(
x == 0 ~ "normal",
x == 1 ~ "ST-T wave abnormality",
x == 2 ~ "Estes' criteria",
TRUE ~ NA_character_
)
}
convert_exang <- function(x) {
if_else(x == 1, TRUE, FALSE)
}
convert_slope <- function(x) {
case_when(
x == 1 ~ "upsloping",
x == 2 ~ "flat",
x == 3 ~ "downsloping",
TRUE ~ NA_character_
)
}
convert_thal <- function(x) {
case_when(
x == 3 ~ "normal",
x == 6 ~ "fixed defect",
x == 7 ~ "reversible defect",
TRUE ~ NA_character_
)
}
```
Use the following to check your code.
```
## NOTE: No need to change this
assertthat::assert_that(assertthat::are_equal(
convert_cp(c(1, 2, 3, 4)),
c("typical angina", "atypical angina", "non-anginal pain", "asymptomatic")
))
```
```
## [1] TRUE
```
```
assertthat::assert_that(assertthat::are_equal(
convert_fbs(c(1, 0)),
c(TRUE, FALSE)
))
```
```
## [1] TRUE
```
```
assertthat::assert_that(assertthat::are_equal(
convert_restecv(c(0, 1, 2)),
c("normal", "ST-T wave abnormality", "Estes' criteria")
))
```
```
## [1] TRUE
```
```
assertthat::assert_that(assertthat::are_equal(
convert_exang(c(1, 0)),
c(TRUE, FALSE)
))
```
```
## [1] TRUE
```
```
assertthat::assert_that(assertthat::are_equal(
convert_slope(c(1, 2, 3)),
c("upsloping", "flat", "downsloping")
))
```
```
## [1] TRUE
```
```
assertthat::assert_that(assertthat::are_equal(
convert_thal(c(3, 6, 7)),
c("normal", "fixed defect", "reversible defect")
))
```
```
## [1] TRUE
```
```
print("Excellent!")
```
```
## [1] "Excellent!"
```
### 55\.3\.2 **q4** Use your `convert_` functions from q3 to mutate the columns and recode the variables.
```
df_q4 <-
df_q2 %>%
mutate(
sex = convert_sex(sex),
cp = convert_cp(cp),
fbs = convert_fbs(fbs),
restecg = convert_restecv(restecg),
exang = convert_exang(exang),
slope = convert_slope(slope),
thal = convert_thal(thal)
)
df_q4
```
```
## # A tibble: 303 × 14
## age sex cp trest…¹ chol fbs restecg thalach exang oldpeak slope
## <dbl> <chr> <chr> <dbl> <dbl> <lgl> <chr> <dbl> <lgl> <dbl> <chr>
## 1 63 male typical… 145 233 TRUE Estes'… 150 FALSE 2.3 down…
## 2 67 male asympto… 160 286 FALSE Estes'… 108 TRUE 1.5 flat
## 3 67 male asympto… 120 229 FALSE Estes'… 129 TRUE 2.6 flat
## 4 37 male non-ang… 130 250 FALSE normal 187 FALSE 3.5 down…
## 5 41 female atypica… 130 204 FALSE Estes'… 172 FALSE 1.4 upsl…
## 6 56 male atypica… 120 236 FALSE normal 178 FALSE 0.8 upsl…
## 7 62 female asympto… 140 268 FALSE Estes'… 160 FALSE 3.6 down…
## 8 57 female asympto… 120 354 FALSE normal 163 TRUE 0.6 upsl…
## 9 63 male asympto… 130 254 FALSE Estes'… 147 FALSE 1.4 flat
## 10 53 male asympto… 140 203 TRUE Estes'… 155 TRUE 3.1 down…
## # … with 293 more rows, 3 more variables: ca <dbl>, thal <chr>, num <dbl>, and
## # abbreviated variable name ¹trestbps
```
55\.4 Prepare the Data for Modeling
-----------------------------------
Now we have a clean dataset we can use for EDA and modeling—great! Before we finish this exercise, let’s do some standard checks to understand these data:
### 55\.4\.1 **q5** Perform your *first checks* on `df_q4`. Answer the questions below.
*Hint*: You may need to do some “deeper checks” to answer some of the questions below.
```
df_q4 %>% summary()
```
```
## age sex cp trestbps
## Min. :29.00 Length:303 Length:303 Min. : 94.0
## 1st Qu.:48.00 Class :character Class :character 1st Qu.:120.0
## Median :56.00 Mode :character Mode :character Median :130.0
## Mean :54.44 Mean :131.7
## 3rd Qu.:61.00 3rd Qu.:140.0
## Max. :77.00 Max. :200.0
##
## chol fbs restecg thalach
## Min. :126.0 Mode :logical Length:303 Min. : 71.0
## 1st Qu.:211.0 FALSE:258 Class :character 1st Qu.:133.5
## Median :241.0 TRUE :45 Mode :character Median :153.0
## Mean :246.7 Mean :149.6
## 3rd Qu.:275.0 3rd Qu.:166.0
## Max. :564.0 Max. :202.0
##
## exang oldpeak slope ca
## Mode :logical Min. :0.00 Length:303 Min. :0.0000
## FALSE:204 1st Qu.:0.00 Class :character 1st Qu.:0.0000
## TRUE :99 Median :0.80 Mode :character Median :0.0000
## Mean :1.04 Mean :0.6722
## 3rd Qu.:1.60 3rd Qu.:1.0000
## Max. :6.20 Max. :3.0000
## NA's :4
## thal num
## Length:303 Min. :0.0000
## Class :character 1st Qu.:0.0000
## Mode :character Median :0.0000
## Mean :0.9373
## 3rd Qu.:2.0000
## Max. :4.0000
##
```
**Observations**:
Variables:
\- Numerical: `age, trestbps, chol, thalach, oldpeak, ca, num`
\- Factors (stored as character): `sex, cp, restecg, slope, thal`
\- Logical: `fbs, exang`
Missingness:
```
map(
df_q4,
~ sum(is.na(.))
)
```
```
## $age
## [1] 0
##
## $sex
## [1] 0
##
## $cp
## [1] 0
##
## $trestbps
## [1] 0
##
## $chol
## [1] 0
##
## $fbs
## [1] 0
##
## $restecg
## [1] 0
##
## $thalach
## [1] 0
##
## $exang
## [1] 0
##
## $oldpeak
## [1] 0
##
## $slope
## [1] 0
##
## $ca
## [1] 4
##
## $thal
## [1] 2
##
## $num
## [1] 0
```
From this, we can see that most variables have no missing values, but `ca` has `4` and `thal` has `2`.
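A more compact way to get the same counts (an optional sketch) keeps the result in a one\-row tibble:
```
## NOTE: Optional; same missing-value counts, returned as a one-row tibble
df_q4 %>%
  summarize(across(everything(), ~ sum(is.na(.))))
```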
Missingness pattern:
```
df_q4 %>%
filter(is.na(ca) | is.na(thal)) %>%
select(ca, thal, everything())
```
```
## # A tibble: 6 × 14
## ca thal age sex cp trest…¹ chol fbs restecg thalach exang
## <dbl> <chr> <dbl> <chr> <chr> <dbl> <dbl> <lgl> <chr> <dbl> <lgl>
## 1 0 <NA> 53 fema… non-… 128 216 FALSE Estes'… 115 FALSE
## 2 NA normal 52 male non-… 138 223 FALSE normal 169 FALSE
## 3 NA reversible … 43 male asym… 132 247 TRUE Estes'… 143 TRUE
## 4 0 <NA> 52 male asym… 128 204 TRUE normal 156 TRUE
## 5 NA reversible … 58 male atyp… 125 220 FALSE normal 144 FALSE
## 6 NA normal 38 male non-… 138 175 FALSE normal 173 FALSE
## # … with 3 more variables: oldpeak <dbl>, slope <chr>, num <dbl>, and
## # abbreviated variable name ¹trestbps
```
There are six rows with missing values.
If we were just doing EDA, we could stop here. However, we’re going to use these data for *modeling* in a future exercise. Most models can’t deal with `NA` values, so we must choose how to handle rows with `NA`’s. In cases where only a few observations are missing values, we can simply *filter out* those rows.
### 55\.4\.2 **q6** Filter out the rows with missing values.
```
df_q6 <-
df_q4 %>%
filter(!is.na(ca), !is.na(thal))
df_q6
```
```
## # A tibble: 297 × 14
## age sex cp trest…¹ chol fbs restecg thalach exang oldpeak slope
## <dbl> <chr> <chr> <dbl> <dbl> <lgl> <chr> <dbl> <lgl> <dbl> <chr>
## 1 63 male typical… 145 233 TRUE Estes'… 150 FALSE 2.3 down…
## 2 67 male asympto… 160 286 FALSE Estes'… 108 TRUE 1.5 flat
## 3 67 male asympto… 120 229 FALSE Estes'… 129 TRUE 2.6 flat
## 4 37 male non-ang… 130 250 FALSE normal 187 FALSE 3.5 down…
## 5 41 female atypica… 130 204 FALSE Estes'… 172 FALSE 1.4 upsl…
## 6 56 male atypica… 120 236 FALSE normal 178 FALSE 0.8 upsl…
## 7 62 female asympto… 140 268 FALSE Estes'… 160 FALSE 3.6 down…
## 8 57 female asympto… 120 354 FALSE normal 163 TRUE 0.6 upsl…
## 9 63 male asympto… 130 254 FALSE Estes'… 147 FALSE 1.4 flat
## 10 53 male asympto… 140 203 TRUE Estes'… 155 TRUE 3.1 down…
## # … with 287 more rows, 3 more variables: ca <dbl>, thal <chr>, num <dbl>, and
## # abbreviated variable name ¹trestbps
```
Use the following to check your code.
```
## NOTE: No need to change this
assertthat::assert_that(
dim(
df_q6 %>%
filter(rowSums(across(everything(), is.na)) > 0)
)[1] == 0
)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
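As an aside, `tidyr::drop_na()` (loaded as part of the tidyverse) would accomplish the same row\-filtering in a single call; this is just a sketch of the alternative, not a change to the q6 solution above:
```
## NOTE: Sketch only; drops every row containing at least one NA
df_q4 %>%
  drop_na()
```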
55\.5 In summary
----------------
* We cleaned the dataset by giving it sensible names and recoding factors with human\-readable values.
* We filtered out rows with missing values (`NA`’s) *because we intend to use these data for modeling*.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/model-model-selection-and-the-test-validate-framework.html |
56 Model: Model Selection and the Test\-Validate Framework
==========================================================
*Purpose*: When designing a model, we need to make choices about the model form. However, since we are *optimizing* the model to fit our data, we need to be careful not to bias our assessments and make poor modeling choices. We can use a *training* and *validation* split of our data to help make these choices. To understand these issues, we’ll discuss underfitting, overfitting, and the test\-validate framework.
*Reading*: [Training, validation, and test sets](https://en.wikipedia.org/wiki/Training,_validation,_and_test_sets) (Optional)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(modelr)
library(broom)
```
```
##
## Attaching package: 'broom'
```
```
## The following object is masked from 'package:modelr':
##
## bootstrap
```
We’ll look at two cases: first, a simple polynomial\-fitting problem; then, a more realistic case using the `diamonds` dataset.
56\.1 Illustrative Case: Polynomial Regression
----------------------------------------------
To illustrate the ideas behind the test\-validate framework, let’s study a very simple problem: Fitting a polynomial. The following code sets up this example.
```
## NOTE: No need to edit this chunk
set.seed(101)
# Ground-truth function we seek to approximate
fcn_true <- function(x) {12 * (x - 0.5)^3 - 2 * x + 1}
# Generate data
n_samples <- 100
df_truth <-
tibble(x = seq(0, 1, length.out = n_samples)) %>%
mutate(
y = fcn_true(x), # True values
y_meas = y + 0.05 * rnorm(n_samples) # Measured with noise
)
# Select training data
df_measurements <-
df_truth %>%
slice_sample(n = 20) %>%
select(x, y_meas)
# Visualize
df_truth %>%
ggplot(aes(x, y)) +
geom_line() +
geom_point(
data = df_measurements,
mapping = aes(y = y_meas, color = "Measurements")
)
```
In what follows, we will behave as though we only have access to `df_measurements`—this is to model a “real” case where we have limited data. We will attempt to fit a polynomial to the data; remember that a polynomial of degree \\(d\\) is a function of the form
\\\[f\_{\\text{polynomial}}(x) \= \\sum\_{i\=0}^d \\beta\_i x^i,\\]
where the \\(\\beta\_i\\) are coefficients, and \\(x^0 \= 1\\) is a constant.
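For reference, the R formula interface maps directly onto this definition; a degree\-2 fit could also be written with explicit terms (a sketch; `poly()`, used below, constructs orthogonal polynomial terms instead, but the fitted values are identical):
```
## NOTE: Sketch only; same fitted values as poly(x, degree = 2), written term-by-term
fit_manual <- lm(y_meas ~ x + I(x^2), data = df_measurements)
```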
56\.2 Underfitting
------------------
The following code fits a polynomial of degree 2 to the available data `df_measurements`.
### 56\.2\.1 **q1** Run the following code and inspect the (visual) results. Describe whether the model (`Predicted`) captures the “trends” in the measured data (black dots).
```
## NOTE: No need to edit this code; run and inspect
# Fit a polynomial of degree = 2
fit_d2 <-
df_measurements %>%
lm(
data = .,
formula = y_meas ~ poly(x, degree = 2)
)
# Visualize the results
df_truth %>%
add_predictions(fit_d2, var = "y_pred") %>%
ggplot(aes(x)) +
geom_line(aes(y = y, color = "True")) +
geom_line(aes(y = y_pred, color = "Predicted")) +
geom_point(data = df_measurements, aes(y = y_meas)) +
scale_color_discrete(name = "") +
theme_minimal()
```
**Observations**:
* The `Predicted` values do not capture the trend in the measured points, nor in the `True` function.
This phenomenon is called [*underfitting*](https://en.wikipedia.org/wiki/Overfitting#Underfitting): the model is not “flexible” enough to capture the trends observed in the data. We can increase the model’s flexibility by increasing the polynomial order, which we’ll do below.
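One quick way to check for underfitting numerically is to inspect the residuals: systematic structure, rather than random scatter, suggests the model is missing a trend. Here is a minimal sketch using `modelr::add_residuals()`:
```
## NOTE: Sketch only; structured (non-random) residuals are a symptom of underfitting
df_measurements %>%
  add_residuals(fit_d2, var = "resid") %>%
  ggplot(aes(x, resid)) +
  geom_point() +
  geom_hline(yintercept = 0, linetype = "dashed")
```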
56\.3 Overfitting
-----------------
Let’s increase the polynomial order and re\-fit the data to try to solve the underfitting problem.
### 56\.3\.1 **q2** Copy the code from above to fit a `degree = 17` polynomial to the measurements.
```
## TASK: Fit a high-degree polynomial to df_measurements
fit_over <-
df_measurements %>%
lm(data = ., formula = y_meas ~ poly(x, degree = 17))
## NOTE: No need to modify code below
y_limits <-
c(
df_truth %>% pull(y) %>% min(),
df_truth %>% pull(y) %>% max()
)
df_truth %>%
add_predictions(fit_over, var = "y_pred") %>%
ggplot(aes(x)) +
geom_line(aes(y = y, color = "True")) +
geom_line(aes(y = y_pred, color = "Predicted")) +
geom_point(data = df_measurements, aes(y = y_meas)) +
scale_color_discrete(name = "") +
coord_cartesian(ylim = y_limits) +
theme_minimal()
```
**Observations**:
* The predictions are *perfect* at the measured points.
* The predictions are *terrible* outside the measured points.
The phenomenon we see with the high\-degree case above is called [*overfitting*](https://en.wikipedia.org/wiki/Overfitting). Overfitting tends to occur when our model is “too flexible”; this excess flexibility allows the model to fit to extraneous patterns, such as measurement noise or data artifacts due to sampling.
So we have a “Goldilocks” problem:
* We need the model to be *flexible enough* to fit patterns in the data. (Avoid *underfitting*)
* We need the model to be *not too flexible* so as not to fit to noise. (Avoid *overfitting*)
Quantities such as polynomial order that control model flexibility are called [hyperparameters](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)); essentially, these are parameters that are not set during the optimization we discussed in `e-stat11-models-intro`. We might choose to set hyperparameter values based on minimizing the model error.
However, if we try to set the hyperparameters based on the *training error*, we’re going to make some bad choices. The next task gives us a hint why.
### 56\.3\.2 **q3** Compute the `mse` for the 2nd and high\-degree polynomial models on `df_measurements`. Which model has the lower error? Which hyperparameter value (polynomial degree) would you choose, based *solely* on these numbers?
*Hint*: We learned how to do this in `e-stat11-models-intro`.
```
# TASK: Compute the mse for fit_d2 and fit_over on df_measurements
mse(fit_d2, df_measurements)
```
```
## [1] 0.03025928
```
```
mse(fit_over, df_measurements)
```
```
## [1] 0.001280951
```
**Observations**:
* `fit_over` has lower error on `df_measurements`.
* Based *solely* on these results, we would be inclined to choose the high\-degree polynomial model.
* However, this would be a poor decision, as we have a highly biased measure of model error. We would be better served by studying the error on a *validation* set.
56\.4 A Solution: Validation Data
---------------------------------
A solution to the problem above is to reserve a set of *validation data* to tune the hyperparameters of our model. Note that this requires us to *split* our data into different sets: training data and validation data. The following code makes that split on `df_measurements`.
```
## NOTE: No need to edit this chunk
set.seed(101)
# Select "training" data from our available measurements
df_train <-
df_measurements %>%
slice_sample(n = 10)
# Use the remaining data as "validation" data
df_validate <-
anti_join(
df_measurements,
df_train,
by = "x"
)
# Visualize the split
df_truth %>%
ggplot(aes(x, y)) +
geom_line() +
geom_point(
data = df_train %>% mutate(source = "Train"),
mapping = aes(y = y_meas, color = source)
) +
geom_point(
data = df_validate %>% mutate(source = "Validate"),
mapping = aes(y = y_meas, color = source)
) +
scale_color_discrete(name = "Data")
```
**Idea**:
* Fit the model on the *training* data `df_train`.
* Assess the model on *validation* data `df_validate`.
* Use the assessment on validation data to choose the polynomial order.
The following code *sweeps* through different values of polynomial order, fits a polynomial, and computes the associated error on both the `Train` and `Validate` sets.
```
## NOTE: No need to change this code
df_sweep <-
map_dfr(
seq(1, 9, by = 1),
function(order) {
# Fit a temporary model
fit_tmp <-
lm(
data = df_train,
formula = y_meas ~ poly(x, order)
)
# Compute error on the Train and Validate sets
tibble(
error_Train = mse(fit_tmp, df_train),
error_Validate = mse(fit_tmp, df_validate),
order = order
)
}
) %>%
pivot_longer(
names_to = c(".value", "source"),
names_sep = "_",
cols = matches("error")
)
```
In the next task, you will compare the resulting error metrics.
### 56\.4\.1 **q4** Inspect the results of the degree sweep, and answer the questions below.
```
## NOTE: No need to edit; inspect and write your observations
df_sweep %>%
ggplot(aes(order, error, color = source)) +
geom_line() +
scale_y_log10() +
scale_x_continuous(breaks = seq(1, 10, by = 1)) +
scale_color_discrete(name = "Method") +
coord_cartesian(ylim = c(1e-3, 1)) +
theme(legend.position = "bottom") +
labs(
x = "Polynomial Order",
y = "Mean Squared Error"
)
```
**Observations**
* Training error is minimized at polynomial order 8, or possibly higher.
* Validation error is minimized at polynomial order 3\.
* Selecting the polynomial order via the validation error leads to the correct choice.
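If we wanted to pick the order programmatically rather than by eye, a minimal sketch is to filter `df_sweep` for the smallest validation error:
```
## NOTE: Sketch only; returns the row (and hence the order) with the lowest validation error
df_sweep %>%
  filter(source == "Validate") %>%
  slice_min(error, n = 1)
```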
56\.5 Intermediate Summary
--------------------------
We’ve seen a few ideas:
* A model that is *not flexible enough* will tend to *underfit* a dataset.
* A model that is *too flexible* will tend to *overfit* a dataset.
* The *training error* is an optimistic measure of accuracy, so it is not an appropriate metric for setting hyperparameter values.
* To set hyperparameter values, we are better off “holding out” a *validation set* from our data, and using the *validation error* to make model decisions.
56\.6 More Challenging Case: Modeling Diamond Prices
----------------------------------------------------
Above we made our model more flexible by changing the polynomial order. For instance, a 2nd\-order polynomial model would be
\\\[\\hat{y}\_2 \= \\beta\_0 \+ \\beta\_1 x \+ \\beta\_2 x^2 \+ \\epsilon,\\]
while a 5th\-order polynomial model would be
\\\[\\hat{y}\_5 \= \\beta\_0 \+ \\beta\_1 x \+ \\beta\_2 x^2\+ \\beta\_3 x^3\+ \\beta\_4 x^4 \+ \\beta\_5 x^5 \+ \\epsilon.\\]
In effect, we are *adding another predictor* of the form \\(\\beta\_i x^i\\) every time we increase the polynomial order. Increasing polynomial order is just one way we increase model flexibility; another way is to *add additional variables to the model*.
For instance, in the diamonds dataset we have a number of variables that we could use as predictors:
```
diamonds %>%
select(-price) %>%
names()
```
```
## [1] "carat" "cut" "color" "clarity" "depth" "table" "x"
## [8] "y" "z"
```
Let’s put the train\-validation idea to work! Below I set up training and validation sets of the diamonds data, and train a very silly model that blindly uses all the variables available as predictors. The challenge: Can you beat this model?
```
## NOTE: No need to edit this setup
# Create a test-validate split
set.seed(101)
diamonds_randomized <-
diamonds %>%
slice(sample(dim(diamonds)[1]))
diamonds_train <-
diamonds_randomized %>%
slice(1:10000)
diamonds_validate <-
diamonds_randomized %>%
slice(10001:20000)
# Try to beat this naive model that uses all variables!
fit_full <-
lm(
data = diamonds_train,
formula = price ~ . # The `.` notation here means "use all variables"
)
```
### 56\.6\.1 **q5** Build your own model!
Choose which predictors to include by modifying the `formula` argument below. Use the validation data to help guide your choice. Answer the questions below.
*Hint*: We’ve done EDA on `diamonds` before. *Use your knowledge* from that past EDA to choose variables you think will be informative for predicting the `price`.
```
## NOTE: This is just one possible model!
fit_q5 <-
lm(
data = diamonds_train,
formula = price ~ carat + cut + color + clarity
)
# Compare the two models on the validation set
mse(fit_q5, diamonds_validate)
```
```
## [1] 1306804
```
```
mse(fit_full, diamonds_validate)
```
```
## [1] 1568726
```
**Observations**:
* `carat` by itself does a decent job predicting `price`.
+ Based on EDA we’ve done before, it appears that `carat` is the major decider in diamond price.
* `cut`, `color`, and `clarity` help, but do not have the same predictive power (by themselves) as `carat`.
+ Based on EDA we’ve done before, we know that `cut`, `color`, and `clarity` are important for `price`, but not quite as important as `carat`.
* `x`, `y`, and `z` *alone* have predictive power similar to `carat` alone. They probably correlate well with the weight, as they measure the dimensions of the diamond.
* `x`, `y`, and `z` *together* have very poor predictive power.
* `depth` and `table` do not have the same predictive power as `carat`.
* The best combination of predictors I found was `carat + cut + color + clarity`.
*Aside*: The process of choosing predictors—sometimes called *features*—is called [feature selection](https://en.wikipedia.org/wiki/Feature_selection).
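Here is a hedged sketch of how one might compare a few candidate formulas systematically on the validation set; the `candidate_formulas` vector is purely illustrative:
```
## NOTE: Sketch only; the formulas listed here are illustrative candidates
candidate_formulas <- c(
  "price ~ carat",
  "price ~ carat + cut",
  "price ~ carat + cut + color + clarity"
)
map_dfr(
  candidate_formulas,
  function(form) {
    fit_tmp <- lm(as.formula(form), data = diamonds_train)
    tibble(formula = form, mse_validate = mse(fit_tmp, diamonds_validate))
  }
)
```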
One last thing: Note above that I first *randomized* the diamonds before selecting training and validation data. *This is really important!* Let’s see what happens if we *don’t randomize* the data before splitting:
### 56\.6\.2 **q6** Visualize a histogram for the prices of `diamonds_train_bad` and `diamonds_validate_bad`. Answer the questions below.
```
## NOTE: No need to edit this part
diamonds_train_bad <-
diamonds %>%
slice(1:10000)
diamonds_validate_bad <-
diamonds %>%
slice(10001:20000)
## TODO: Visualize a histogram of prices for both `bad` sets.
bind_rows(
diamonds_train_bad %>% mutate(source = "Train"),
diamonds_validate_bad %>% mutate(source = "Validate")
) %>%
ggplot(aes(price)) +
geom_histogram(bins = 100) +
facet_grid(source ~ .)
```
**Observations**:
* `diamonds_train_bad` and `diamonds_validate_bad` have very little overlap! It seems the `diamonds` dataset has some ordering along `price`, which greatly affects our split.
* If we were to train and then validate, we would be training on lower\-price diamonds and predicting on higher\-price diamonds. This might actually be appropriate if we’re trying to extrapolate from low to high. But if we are trying to get a representative estimate of error for training, this would be an inappropriate split.
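For contrast, repeating the histogram comparison on the *randomized* split built earlier (`diamonds_train` and `diamonds_validate`) should show the two sets covering a similar range of prices; a sketch:
```
## NOTE: Sketch only; the randomized split yields overlapping price distributions
bind_rows(
  diamonds_train %>% mutate(source = "Train"),
  diamonds_validate %>% mutate(source = "Validate")
) %>%
  ggplot(aes(price)) +
  geom_histogram(bins = 100) +
  facet_grid(source ~ .)
```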
56\.1 Illustrative Case: Polynomial Regression
----------------------------------------------
To illustrate the ideas behind the test\-validate framework, let’s study a very simple problem: Fitting a polynomial. The following code sets up this example.
```
## NOTE: No need to edit this chunk
set.seed(101)
# Ground-truth function we seek to approximate
fcn_true <- function(x) {12 * (x - 0.5)^3 - 2 * x + 1}
# Generate data
n_samples <- 100
df_truth <-
tibble(x = seq(0, 1, length.out = n_samples)) %>%
mutate(
y = fcn_true(x), # True values
y_meas = y + 0.05 * rnorm(n_samples) # Measured with noise
)
# Select training data
df_measurements <-
df_truth %>%
slice_sample(n = 20) %>%
select(x, y_meas)
# Visualize
df_truth %>%
ggplot(aes(x, y)) +
geom_line() +
geom_point(
data = df_measurements,
mapping = aes(y = y_meas, color = "Measurements")
)
```
In what follows, we will behave as though we only have access to `df_measurements`—this is to model a “real” case where we have limited data. We will attempt to fit a polynomial to the data; remember that a polynomial of degree \\(d\\) is a function of the form
\\\[f\_{\\text{polynomial}}(x) \= \\sum\_{i\=0}^d \\beta\_i x^i,\\]
where the \\(\\beta\_i\\) are coefficients, and \\(x^0 \= 1\\) is a constant.
56\.2 Underfitting
------------------
The following code fits a polynomial of degree 2 to the available data `df_measurements`.
### 56\.2\.1 **q1** Run the following code and inspect the (visual) results. Describe whether the model (`Predicted`) captures the “trends” in the measured data (black dots).
```
## NOTE: No need to edit this code; run and inspect
# Fit a polynomial of degree = 2
fit_d2 <-
df_measurements %>%
lm(
data = .,
formula = y_meas ~ poly(x, degree = 2)
)
# Visualize the results
df_truth %>%
add_predictions(fit_d2, var = "y_pred") %>%
ggplot(aes(x)) +
geom_line(aes(y = y, color = "True")) +
geom_line(aes(y = y_pred, color = "Predicted")) +
geom_point(data = df_measurements, aes(y = y_meas)) +
scale_color_discrete(name = "") +
theme_minimal()
```
**Observations**:
* The `Predicted` values do not capture the trend in the measured points, nor in the `True` function.
This phenomenon is called [*underfitting*](https://en.wikipedia.org/wiki/Overfitting#Underfitting): This is when the model is not “flexible” enough to capture trends observed in the data. We can increase the flexibility of the model by increasing the polynomial order, which we’ll do below.
### 56\.2\.1 **q1** Run the following code and inspect the (visual) results. Describe whether the model (`Predicted`) captures the “trends” in the measured data (black dots).
```
## NOTE: No need to edit this code; run and inspect
# Fit a polynomial of degree = 2
fit_d2 <-
df_measurements %>%
lm(
data = .,
formula = y_meas ~ poly(x, degree = 2)
)
# Visualize the results
df_truth %>%
add_predictions(fit_d2, var = "y_pred") %>%
ggplot(aes(x)) +
geom_line(aes(y = y, color = "True")) +
geom_line(aes(y = y_pred, color = "Predicted")) +
geom_point(data = df_measurements, aes(y = y_meas)) +
scale_color_discrete(name = "") +
theme_minimal()
```
**Observations**:
* The `Predicted` values do not capture the trend in the measured points, nor in the `True` function.
This phenomenon is called [*underfitting*](https://en.wikipedia.org/wiki/Overfitting#Underfitting): This is when the model is not “flexible” enough to capture trends observed in the data. We can increase the flexibility of the model by increasing the polynomial order, which we’ll do below.
56\.3 Overfitting
-----------------
Let’s increase the polynomial order and re\-fit the data to try to solve the underfitting problem.
### 56\.3\.1 **q2** Copy the code from above to fit a `degree = 17` polynomial to the measurements.
```
## TASK: Fit a high-degree polynomial to df_measurements
fit_over <-
df_measurements %>%
lm(data = ., formula = y_meas ~ poly(x, degree = 17))
## NOTE: No need to modify code below
y_limits <-
c(
df_truth %>% pull(y) %>% min(),
df_truth %>% pull(y) %>% max()
)
df_truth %>%
add_predictions(fit_over, var = "y_pred") %>%
ggplot(aes(x)) +
geom_line(aes(y = y, color = "True")) +
geom_line(aes(y = y_pred, color = "Predicted")) +
geom_point(data = df_measurements, aes(y = y_meas)) +
scale_color_discrete(name = "") +
coord_cartesian(ylim = y_limits) +
theme_minimal()
```
**Observations**:
* The predictions are *perfect* at the measured points.
* The predictions are *terrible* outside the measured points.
The phenomenon we see with the high\-degree case above is called [*overfitting*](https://en.wikipedia.org/wiki/Overfitting). Overfitting tends to occur when our model is “too flexible”; this excess flexibility allows the model to fit to extraneous patterns, such as measurement noise or data artifacts due to sampling.
So we have a “Goldilocks” problem:
* We need the model to be *flexible enough* to fit patterns in the data. (Avoid *underfitting*)
* We need the model to be *not too flexible* so as not to fit to noise. (Avoid *overfitting*)
Quantities such as polynomial order that control model flexibility are called [hyperparameters](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)); essentially, these are parameters that are not set during the optimization we discussed in `e-stat11-models-intro`. We might choose to set hyperparameter values based on minimizing the model error.
However, if we try to set the hyperparameters based on the *training error*, we’re going to make some bad choices. The next task gives us a hint why.
### 56\.3\.2 **q3** Compute the `mse` for the 2nd and high\-degree polynomial models on `df_measurements`. Which model has the lower error? Which hyperparameter value (polynomial degree) would you choose, based *solely* on these numbers?
*Hint*: We learned how to do this in `e-stat11-models-intro`.
```
# TASK: Compute the mse for fit_d2 and fit_over on df_measurements
mse(fit_d2, df_measurements)
```
```
## [1] 0.03025928
```
```
mse(fit_over, df_measurements)
```
```
## [1] 0.001280951
```
**Observations**:
* `fit_over` has lower error on `df_measurements`.
* Based *solely* on these results, we would be inclined to choose high\-degree polynomial model.
* However, this would be a poor decision, as we have a highly biased measure of model error. We would be better served by studying the error on a *validation* set.
### 56\.3\.1 **q2** Copy the code from above to fit a `degree = 17` polynomial to the measurements.
```
## TASK: Fit a high-degree polynomial to df_measurements
fit_over <-
df_measurements %>%
lm(data = ., formula = y_meas ~ poly(x, degree = 17))
## NOTE: No need to modify code below
y_limits <-
c(
df_truth %>% pull(y) %>% min(),
df_truth %>% pull(y) %>% max()
)
df_truth %>%
add_predictions(fit_over, var = "y_pred") %>%
ggplot(aes(x)) +
geom_line(aes(y = y, color = "True")) +
geom_line(aes(y = y_pred, color = "Predicted")) +
geom_point(data = df_measurements, aes(y = y_meas)) +
scale_color_discrete(name = "") +
coord_cartesian(ylim = y_limits) +
theme_minimal()
```
**Observations**:
* The predictions are *perfect* at the measured points.
* The predictions are *terrible* outside the measured points.
The phenomenon we see with the high\-degree case above is called [*overfitting*](https://en.wikipedia.org/wiki/Overfitting). Overfitting tends to occur when our model is “too flexible”; this excess flexibility allows the model to fit to extraneous patterns, such as measurement noise or data artifacts due to sampling.
So we have a “Goldilocks” problem:
* We need the model to be *flexible enough* to fit patterns in the data. (Avoid *underfitting*)
* We need the model to be *not too flexible* so as not to fit to noise. (Avoid *overfitting*)
Quantities such as polynomial order that control model flexibility are called [hyperparameters](https://en.wikipedia.org/wiki/Hyperparameter_(machine_learning)); essentially, these are parameters that are not set during the optimization we discussed in `e-stat11-models-intro`. We might choose to set hyperparameter values based on minimizing the model error.
However, if we try to set the hyperparameters based on the *training error*, we’re going to make some bad choices. The next task gives us a hint why.
### 56\.3\.2 **q3** Compute the `mse` for the 2nd and high\-degree polynomial models on `df_measurements`. Which model has the lower error? Which hyperparameter value (polynomial degree) would you choose, based *solely* on these numbers?
*Hint*: We learned how to do this in `e-stat11-models-intro`.
```
# TASK: Compute the mse for fit_d2 and fit_over on df_measurements
mse(fit_d2, df_measurements)
```
```
## [1] 0.03025928
```
```
mse(fit_over, df_measurements)
```
```
## [1] 0.001280951
```
**Observations**:
* `fit_over` has lower error on `df_measurements`.
* Based *solely* on these results, we would be inclined to choose high\-degree polynomial model.
* However, this would be a poor decision, as we have a highly biased measure of model error. We would be better served by studying the error on a *validation* set.
56\.4 A Solution: Validation Data
---------------------------------
A solution to the problem above is to reserve a set of *validation data* to tune the hyperparameters of our model. Note that this requires us to *split* our data into different sets: training data and validation data. The following code makes that split on `df_measurements`.
```
## NOTE: No need to edit this chunk
set.seed(101)
# Select "training" data from our available measurements
df_train <-
df_measurements %>%
slice_sample(n = 10)
# Use the remaining data as "validation" data
df_validate <-
anti_join(
df_measurements,
df_train,
by = "x"
)
# Visualize the split
df_truth %>%
ggplot(aes(x, y)) +
geom_line() +
geom_point(
data = df_train %>% mutate(source = "Train"),
mapping = aes(y = y_meas, color = source)
) +
geom_point(
data = df_validate %>% mutate(source = "Validate"),
mapping = aes(y = y_meas, color = source)
) +
scale_color_discrete(name = "Data")
```
**Idea**:
* Fit the model on the *training* data `df_train`.
* Assess the model on *validation* data `df_validate`.
* Use the assessment on validation data to choose the polynomial order.
The following code *sweeps* through different values of polynomial order, fits a polynomial, and computes the associated error on both the `Train` and `Validate` sets.
```
## NOTE: No need to change this code
df_sweep <-
map_dfr(
seq(1, 9, by = 1),
function(order) {
# Fit a temporary model
fit_tmp <-
lm(
data = df_train,
formula = y_meas ~ poly(x, order)
)
# Compute error on the Train and Validate sets
tibble(
error_Train = mse(fit_tmp, df_train),
error_Validate = mse(fit_tmp, df_validate),
order = order
)
}
) %>%
pivot_longer(
names_to = c(".value", "source"),
names_sep = "_",
cols = matches("error")
)
```
In the next task, you will compare the resulting error metrics.
### 56\.4\.1 **q4** Inspect the results of the degree sweep, and answer the questions below.
```
## NOTE: No need to edit; inspect and write your observations
df_sweep %>%
ggplot(aes(order, error, color = source)) +
geom_line() +
scale_y_log10() +
scale_x_continuous(breaks = seq(1, 10, by = 1)) +
scale_color_discrete(name = "Method") +
coord_cartesian(ylim = c(1e-3, 1)) +
theme(legend.position = "bottom") +
labs(
x = "Polynomial Order",
y = "Mean Squared Error"
)
```
**Observations**
* Training error is minimized at polynomial order 8, or possibly higher.
* Validation error is minimized at polynomial order 3\.
* Selecting the polynomial order via the validation error leads to the correct choice.
### 56\.4\.1 **q4** Inspect the results of the degree sweep, and answer the questions below.
```
## NOTE: No need to edit; inspect and write your observations
df_sweep %>%
ggplot(aes(order, error, color = source)) +
geom_line() +
scale_y_log10() +
scale_x_continuous(breaks = seq(1, 10, by = 1)) +
scale_color_discrete(name = "Method") +
coord_cartesian(ylim = c(1e-3, 1)) +
theme(legend.position = "bottom") +
labs(
x = "Polynomial Order",
y = "Mean Squared Error"
)
```
**Observations**
* Training error is minimized at polynomial order 8, or possibly higher.
* Validation error is minimized at polynomial order 3\.
* Selecting the polynomial order via the validation error leads to the correct choice.
56\.5 Intermediate Summary
--------------------------
We’ve seen a few ideas:
* A model that is *not flexible enough* will tend to *underfit* a dataset.
* A model that is *too flexible* will tend to *overfit* a dataset.
* The *training error* is an optimistic measure of accuracy, it is not an appropriate metric for setting hyperparameter values.
* To set hyperparameter values, we are better off “holding out” a *validation set* from our data, and using the *validation error* to make model decisions.
56\.6 More Challenging Case: Modeling Diamond Prices
----------------------------------------------------
Above we made our model more flexible by changing the polynomial order. For instance, a 2nd\-order polynomial model would be
\\\[\\hat{y}\_2 \= \\beta\_0 \+ \\beta\_1 x \+ \\beta\_2 x^2 \+ \\epsilon,\\]
while a 5th\-order polynomial model would be
\\\[\\hat{y}\_5 \= \\beta\_0 \+ \\beta\_1 x \+ \\beta\_2 x^2\+ \\beta\_3 x^3\+ \\beta\_4 x^4 \+ \\beta\_5 x^5 \+ \\epsilon.\\]
In effect, we are *adding another predictor* of the form \\(\\beta\_i x^i\\) every time we increase the polynomial order. Increasing polynomial order is just one way we increase model flexibility; another way is to *add additional variables to the model*.
For instance, in the diamonds dataset we have a number of variables that we could use as predictors:
```
diamonds %>%
select(-price) %>%
names()
```
```
## [1] "carat" "cut" "color" "clarity" "depth" "table" "x"
## [8] "y" "z"
```
Let’s put the train\-validation idea to work! Below I set up training and validation sets of the diamonds data, and train a very silly model that blindly uses all the variables available as predictors. The challenge: Can you beat this model?
```
## NOTE: No need to edit this setup
# Create a train-validate split
set.seed(101)
diamonds_randomized <-
diamonds %>%
slice(sample(dim(diamonds)[1]))
diamonds_train <-
diamonds_randomized %>%
slice(1:10000)
diamonds_validate <-
diamonds_randomized %>%
slice(10001:20000)
# Try to beat this naive model that uses all variables!
fit_full <-
lm(
data = diamonds_train,
formula = price ~ . # The `.` notation here means "use all variables"
)
```
### 56\.6\.1 **q5** Build your own model!
Choose which predictors to include by modifying the `formula` argument below. Use the validation data to help guide your choice. Answer the questions below.
*Hint*: We’ve done EDA on `diamonds` before. *Use your knowledge* from that past EDA to choose variables you think will be informative for predicting the `price`.
```
## NOTE: This is just one possible model!
fit_q5 <-
lm(
data = diamonds_train,
formula = price ~ carat + cut + color + clarity
)
# Compare the two models on the validation set
mse(fit_q5, diamonds_validate)
```
```
## [1] 1306804
```
```
mse(fit_full, diamonds_validate)
```
```
## [1] 1568726
```
**Observations**:
* `carat` by itself does a decent job predicting `price`.
+ Based on EDA we’ve done before, it appears that `carat` is the major decider in diamond price.
* `cut`, `color`, and `clarity` help, but do not have the same predictive power (by themselves) as `carat`.
+ Based on EDA we’ve done before, we know that `cut`, `color`, and `clarity` are important for `price`, but not quite as important as `carat`.
* `x`, `y`, and `z` *alone* have predictive power similar to `carat` alone. They probably correlate well with the weight, as they measure the dimensions of the diamond.
* `x`, `y`, and `z` *together* have very poor predictive power.
* `depth` and `table` do not have the same predictive power as `carat`.
* The best combination of predictors I found was `carat + cut + color + clarity`.
*Aside*: The process of choosing predictors—sometimes called *features*—is called [feature selection](https://en.wikipedia.org/wiki/Feature_selection).
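In that spirit, here is a minimal sketch (not part of the exercise; the candidate formulas are just examples) of how one might compare several predictor sets on the validation data in a single pass:
```
## A minimal sketch; assumes mse() and the diamonds_train / diamonds_validate
## splits defined above
list(
  "carat only" = price ~ carat,
  "carat + cut" = price ~ carat + cut,
  "four C's" = price ~ carat + cut + color + clarity
) %>%
  map_dfr(
    function(form) {
      fit_tmp <- lm(data = diamonds_train, formula = form)
      tibble(error_validate = mse(fit_tmp, diamonds_validate))
    },
    .id = "model"
  )
```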
One last thing: Note above that I first *randomized* the diamonds before selecting training and validation data. *This is really important!* Let’s see what happens if we *don’t randomize* the data before splitting:
### 56\.6\.2 **q6** Visualize a histogram for the prices of `diamonds_train_bad` and `diamonds_validate_bad`. Answer the questions below.
```
## NOTE: No need to edit this part
diamonds_train_bad <-
diamonds %>%
slice(1:10000)
diamonds_validate_bad <-
diamonds %>%
slice(10001:20000)
## TODO: Visualize a histogram of prices for both `bad` sets.
bind_rows(
diamonds_train_bad %>% mutate(source = "Train"),
diamonds_validate_bad %>% mutate(source = "Validate")
) %>%
ggplot(aes(price)) +
geom_histogram(bins = 100) +
facet_grid(source ~ .)
```
**Observations**:
* `diamonds_train_bad` and `diamonds_validate_bad` have very little overlap! It seems the `diamonds` dataset has some ordering along `price`, which greatly affects our split.
* If we were to train and then validate, we would be training on lower\-price diamonds and predicting on higher\-price diamonds. This might actually be appropriate if we’re trying to extrapolate from low to high. But if we are trying to get a representative estimate of error for training, this would be an inappropriate split.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/data-liberating-data-with-webplotdigitizer.html |
57 Data: Liberating data with WebPlotDigitizer
==============================================
*Purpose*: Sometimes data are messy—we know how to deal with that. Other times data are “locked up” in a format we can’t easily analyze, such as in an image. In this exercise you’ll learn how to *liberate* data from a plot using WebPlotDigitizer.
*Reading*: (*None*, this exercise *is* the reading.)
*Optional Reading*: [WebPlotDigitizer tutorial video](https://youtu.be/P7GbGdMvopU) \~ 19 minutes. (I recommend you give this a watch if you want some inspiration on other use cases: There are a lot of very clever ways to use this tool!)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
*Background*: [WebPlotDigitizer](https://automeris.io/WebPlotDigitizer/) is one of those tools that is *insanely useful*, but *no one ever teaches*. I didn’t learn about this until six years into graduate school. You’re going to learn some very practical skills in this exercise!
*Note*: I originally extracted these data from an [Economist](https://www.economist.com/graphic-detail/2020/05/13/the-spread-of-covid-has-caused-a-surge-in-american-meat-prices) article on American meat prices and production in 2020\.
57\.1 Setup
-----------
### 57\.1\.1 **q1** Get WebPlotDigitizer.
Go to the [WebPlotDigitizer](https://automeris.io/WebPlotDigitizer/) website and download the desktop version (matching your operating system).
*Note*: On Mac OS X you may have to open `Security & Privacy` in order to launch WebPlotDigitizer on your machine.
57\.2 Extract
-------------
### 57\.2\.1 **q2** Extract the data from the following image:
Beef production
This image shows the percent change in US beef production as reported in this [Economist](https://www.economist.com/graphic-detail/2020/05/13/the-spread-of-covid-has-caused-a-surge-in-american-meat-prices) article. We’ll go through extraction step\-by\-step:
1. Click the `Load Image(s)` button, and select `./images/beef_production.png`.
2\. Choose the `2D (X-Y) Plot` type.
3\. Make sure to *read these instructions*!
4\. Place the four control points; it doesn’t matter what *precise* values you pick, just that you know the X values for the first two, and the Y values for the second two.
*Note*: Once you’ve placed a single point, you can use the arrow keys on your keyboard to make *micro adjustments* to the point; this means *you don’t have to be super\-accurate* with your mouse. Use this to your advantage!
5\. *Calibrate* the axes by entering the X and Y values you placed. Note that you can give decimals, dates, times, or exponents.
6\. Now that you have a set of axes, you can *extract* the data. This plot is fairly high\-contrast, so we can use the *Automatic Extraction* tools. Click on the `Box` setting, and select the foreground color to match the color of the data curve (in this case, black).
Load image
7. Once you’ve selected the box tool, draw a rectangle over an area containing the data. Note that if you cover the labels, the algorithm will try to extract those too!
8\. Click the `Run` button; you should see red dots covering the data curve.
Load image
9. Now you can save the data to a file; make sure the dataset is selected (highlighted in orange) and click the `View Data` button.
10\. Click the `Download .CSV` button and give the file a sensible name.
Congrats! You just *liberated* data from a plot!
### 57\.2\.2 **q3** Extract the data from the following plot. This will give you price data to compare against the production data.
Beef price
57\.3 Use the extracted data
----------------------------
### 57\.3\.1 **q4** Load the price and production datasets you extracted. Join and plot price vs production; what kind of relationship do you see?
```
## NOTE: Your filenames may vary!
df_price <- read_csv(
"./data/beef_price.csv",
col_names = c("date", "price_percent")
)
```
```
## Rows: 232 Columns: 2
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## dbl (1): price_percent
## date (1): date
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
```
```
df_production <- read_csv(
"./data/beef_production.csv",
col_names = c("date", "production_percent")
)
```
```
## Rows: 227 Columns: 2
## ── Column specification ────────────────────────────────────────────────────────
## Delimiter: ","
## dbl (1): production_percent
## date (1): date
##
## ℹ Use `spec()` to retrieve the full column specification for this data.
## ℹ Specify the column types or set `show_col_types = FALSE` to quiet this message.
```
```
## NOTE: I'm relying on WebPlotDigitizer to produce dates in order to
## make this join work. This will probably fail if you have numbers
## rather than dates.
df_both <-
inner_join(
df_price,
df_production,
by = "date"
)
df_both %>%
ggplot(aes(production_percent, price_percent, color = date)) +
geom_point()
```
**Observations**:
* In the middle of the pandemic beef production dropped quickly without a large change in price.
* After production dropped by 20% beef price began to spike.
* As the pandemic continued in the US, beef production increased slightly, but price continued to rise.
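As a complementary view (not required for the exercise), here is a small sketch that plots both extracted series against time rather than against each other:
```
## A minimal sketch; assumes the df_both tibble joined above
df_both %>%
  pivot_longer(
    cols = c(price_percent, production_percent),
    names_to = "series",
    values_to = "percent"
  ) %>%
  ggplot(aes(date, percent, color = series)) +
  geom_line()
```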
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/model-warnings-when-interpreting-linear-models.html |
58 Model: Warnings when interpreting linear models
==================================================
*Purpose*: When fitting a model, we might like to use that model to interpret how predictors affect some outcome of interest. This is a useful thing to do, but interpreting models is also *very challenging*. This exercise will give you a *couple warnings* about interpreting models.
*Reading*: (None, this is the reading)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(modelr)
library(broom)
```
```
##
## Attaching package: 'broom'
```
```
## The following object is masked from 'package:modelr':
##
## bootstrap
```
For this exercise, we’ll use the familiar diamonds dataset.
```
## NOTE: No need to edit this setup
# Create a test-validate split
set.seed(101)
diamonds_randomized <-
diamonds %>%
slice(sample(dim(diamonds)[1]))
diamonds_train <-
diamonds_randomized %>%
slice(1:10000)
```
58\.1 1st Warning: Models Are a Function of the Population
----------------------------------------------------------
Remember that any time we’re doing statistics, we must first **define the population**. That means when we’re fitting models, we need to pay attention to the data we feed the model for training.
Let’s start with a curious observation; look at the effect of `cut` on `price` at low and high carat values:
```
## NOTE: No need to edit this chunk
diamonds_train %>%
mutate(
grouping = if_else(carat < 1.0, "Lower carat", "Upper carat")
) %>%
ggplot(aes(cut, price)) +
geom_boxplot() +
scale_y_log10() +
facet_grid(~ grouping)
```
The trend in `cut` is what we’d expect at upper values (`carat > 1`), but reversed at lower values (`carat <= 1`)! Let’s see how this affects *model predictions*.
### 58\.1\.1 **q1** Compare two models.
Fit two models on `diamonds_train`, one for `carat <= 1` and one for `carat > 1`. Use only `cut` as the predictor. First, make a prediction about how the predictions for the two models will compare, and then inspect the model results below.
```
fit_lower <-
diamonds_train %>%
filter(carat <= 1) %>%
lm(formula = price ~ cut)
fit_upper <-
diamonds_train %>%
filter(carat > 1) %>%
lm(formula = price ~ cut)
## NOTE: No need to modify this code
tibble(cut = c("Fair", "Good", "Very Good", "Premium", "Ideal")) %>%
mutate(
cut = fct_relevel(cut, "Fair", "Good", "Very Good", "Premium", "Ideal")
) %>%
add_predictions(fit_lower, var = "price_pred-lower") %>%
add_predictions(fit_upper, var = "price_pred-upper") %>%
pivot_longer(
names_to = c(".value", "model"),
names_sep = "-",
cols = matches("price")
) %>%
ggplot(aes(cut, price_pred, color = model)) +
geom_line(aes(group = model)) +
geom_point() +
scale_y_log10()
```
**Observations**:
* I expected the lower model to have decreasing price with increasing cut, and the upper model to have the reverse trend.
* Yup! The model behavior matched my expectations.
*Why is this happening?* Let’s investigate!
### 58\.1\.2 **q2** Change the model
Repeat the same exercise above, but instead of `price ~ cut` fit `carat ~ cut`. Interpret the model results: Can the behavior we see below help explain the behavior above?
```
fit_carat_lower <-
diamonds_train %>%
filter(carat <= 1) %>%
lm(formula = carat ~ cut)
fit_carat_upper <-
diamonds_train %>%
filter(carat > 1) %>%
lm(formula = carat ~ cut)
## NOTE: No need to change this code
tibble(cut = c("Fair", "Good", "Very Good", "Premium", "Ideal")) %>%
mutate(
cut = fct_relevel(cut, "Fair", "Good", "Very Good", "Premium", "Ideal")
) %>%
add_predictions(fit_carat_lower, var = "carat_pred-lower") %>%
add_predictions(fit_carat_upper, var = "carat_pred-upper") %>%
pivot_longer(
names_to = c(".value", "model"),
names_sep = "-",
cols = matches("carat")
) %>%
ggplot(aes(cut, carat_pred, color = model)) +
geom_line(aes(group = model)) +
geom_point() +
scale_y_log10()
```
**Observations**:
* We see that the lower model has carat *decreasing* with increasing cut, while the upper model gives carat a relatively *flat relationship* with cut.
* This trend could account for the behavior we saw above: For the lower model, the variable `cut` could be used as a proxy for `carat`, which would lead to a negative model trend.
We can try to fix these issues by adding more predictors. But that leads to our second warning….
58\.2 2nd Warning: Model Coefficients are a Function of All Chosen Predictors
-----------------------------------------------------------------------------
Our models are not just a function of the population, but also of the *specific set of predictors we choose* for the model. That may seem like an obvious statement, but the effects are profound: Adding a new predictor `x2` can change the model’s behavior according to another predictor, say `x1`. This could change an effect enough to *reverse the sign* of a predictor!
The following task will demonstrate this effect.
### 58\.2\.1 **q3** Fit two models, one with both carat and cut, and another with cut only. Fit only to the low\-carat diamonds (`carat <= 1`). Use the provided code to compare the model behavior with `cut`, and answer the questions under *observations* below.
```
fit_carat_cut <-
diamonds_train %>%
filter(carat <= 1) %>%
lm(formula = price ~ carat + cut)
fit_cut_only <-
diamonds_train %>%
filter(carat <= 1) %>%
lm(formula = price ~ cut)
## NOTE: No need to change this code
tibble(
cut = c("Fair", "Good", "Very Good", "Premium", "Ideal"),
carat = c(0.4)
) %>%
mutate(
cut = fct_relevel(cut, "Fair", "Good", "Very Good", "Premium", "Ideal")
) %>%
add_predictions(fit_carat_cut, var = "price_pred-carat_cut") %>%
add_predictions(fit_cut_only, var = "price_pred-cut_only") %>%
pivot_longer(
names_to = c(".value", "model"),
names_sep = "-",
cols = matches("price")
) %>%
ggplot(aes(cut, price_pred, color = model)) +
geom_line(aes(group = model)) +
geom_point() +
scale_y_log10()
```
**Observations**:
* `cut` has a negative effect on `price` for the `cut_only` model.
* `cut` has a positive effect on `price` for the `carat_cut` model.
* We saw above that `cut` predicts `carat` at low carat values; when we don’t include `carat` in the model, the `cut` variable acts as a surrogate for `carat`. When we do include `carat` in the model, the model uses `carat` to predict the price, and can more correctly account for the behavior of `cut` (see the coefficient sketch below).
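To see this sign change directly, here is a minimal sketch (my own addition, using the q3 models above) that pulls out the fitted `cut` coefficients with `broom::tidy()`:
```
## A minimal sketch; assumes fit_cut_only and fit_carat_cut from q3
bind_rows(
  tidy(fit_cut_only) %>% mutate(model = "cut_only"),
  tidy(fit_carat_cut) %>% mutate(model = "carat_cut")
) %>%
  filter(str_detect(term, "cut")) %>%
  select(model, term, estimate)
```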
58\.3 Main Punchline
--------------------
When fitting a model, we might be tempted to interpret the model parameters. Sometimes this can be helpful, but as we’ve seen above the model behavior is a complex function of the population, the available data, and the specific predictors we choose for the model.
When *making predictions* this is not so much of an issue. But when trying to *interpret a model*, we need to exercise caution. A more formal treatment of these ideas involves [confounding variables](https://en.wikipedia.org/wiki/Confounding). The more general statistical exercise of assigning *causal* behavior to different variables is called [causal inference](https://en.wikipedia.org/wiki/Causal_inference). These topics are slippery, and largely outside the scope of this course.
If you’d like to learn more, I *highly* recommend taking more formal courses in statistics!
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/data-liberating-data-with-tabula.html |
59 Data: Liberating data with Tabula
====================================
*Purpose*: Sometimes data are messy—we know how to deal with that. Other times data are “locked up” in a format we can’t easily analyze, such as in a PDF. In this exercise you’ll learn how to *liberate* data from a PDF table using Tabula.
*Reading*: (*None*, this exercise *is* the reading.)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
*Background*: [Tabula](https://tabula.technology/) is a piece of software developed for journalists carrying out investigative reporting. It was developed with support from organizations like [ProPublica](http://propublica.org/) and [The New York Times](http://www.nytimes.com/). This tool is meant to help investigators parse unwieldy PDFs and liberate useful information.
59\.1 Setup
-----------
### 59\.1\.1 **q1** Install Tabula.
Download and install [Tabula](https://tabula.technology/); the webpage has installation instructions.
*Note*: Tabula’s interface is through a locally\-hosted server; it should automatically open a browser window for Tabula. If it does not, then open <http://localhost:8080/> after you’ve launched Tabula.
59\.2 Liberating Data
---------------------
### 59\.2\.1 **q2\.1** Obtain the data.
Download `FY2019 independent financial audit report (PDF)` from the Needham, MA [financial reports page](https://www.needhamma.gov/1673/Financial-Reports).
### 59\.2\.2 **q2\.2** Try it the hard way.
Try copy\-pasting from the FY2019 report the table `Government-Wide Financial Analysis` into a text document or your favorite spreadsheet editor. **This is unlikely to produce the desired result.** (Please don’t spend any time trying to format this copied data—you’re about to learn a better way!)
Tabula is a tool that will help us *liberate* the data; basically, it’s a copy\-paste for PDF tables *that actually works*.
### 59\.2\.3 **q3** Extract from the FY2019 report the `Government-Wide Financial Analysis` table.
We’ll do this in steps:
1. Click the browse button to select your downloaded FY2019 report and click *Import*.
Tabula browse
2. Wait for the file to finish processing; this takes about 2 minutes on my laptop.
Tabula browse
3. Once Tabula has imported the file, your view will switch to a view of the PDF.
Tabula browse
4. Scroll to the `Government-Wide Financial Analysis` table; click and drag to select the data. Click *Preview \& Export Extracted Data*.
Tabula browse
5. You will arrive at a preview of the extracted data. You may find that Tabula has merged some of the columns; if this happens click the *Revise selection(s)* button to go back and adjust your selection.
Tabula browse
6. Once you have a preview that matches the columns above, select the CSV filetype and click the *Export* button. Download the file to your `data` folder and give it a sensible filename.
### 59\.2\.4 **q4** Load and clean the data.
Load and clean the table you extracted above. Use the column names `category` and `[governmental|business|total]_[2019|2018]`. Do not *tidy* (pivot) the data yet, but make sure the appropriate columns are numeric.
*Note*: In accounting practice, numbers in parentheses are understood to be negative, e.g. `(1000) = -1000`.
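Here is a small standalone sketch (not needed for the exercise; the example strings are illustrative) of how `str_detect()` and `parse_number()` can handle that accounting convention:
```
## A minimal sketch of the parentheses-mean-negative convention
tibble(raw = c("1,160", "(62,396)")) %>%
  mutate(
    value = if_else(
      str_detect(raw, "\\("),
      -parse_number(raw), # parenthesized values become negative
      parse_number(raw)
    )
  )
```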
```
df_2019_raw <- read_csv(
"./data/needham_fs19.csv",
skip = 1,
col_names = c(
"category",
"X2",
"governmental_2019",
"X4",
"governmental_2018",
"X6",
"business_2019",
"X8",
"business_2018",
"X10",
"total_2019",
"X12",
"total_2018"
)
)
df_2019 <-
df_2019_raw %>%
select(-contains("X")) %>%
## across() allows us to apply the same mutation to multiple
## columns; remove all internal spaces from numbers
mutate(across(-category, ~str_remove_all(., "\\s"))) %>%
## Handle numbers enclosed by parentheses; make them negative
## and remove all parentheses for the number parser
mutate(across(
-category,
~if_else(
str_detect(., "\\("),
str_c("-", str_remove_all(., "[\\(\\)]")),
str_remove_all(., "[\\(\\)]")
)
)) %>%
## Use the number parser to handle conversions
mutate(across(-category, parse_number)) %>%
## Fix a couple chopped lines
mutate(
category = if_else(
category == "resources",
"Total assets and deferred outflow of resources",
category
)
) %>%
mutate(
category = str_replace(
category,
"resources, and net position",
"Total liabilities, deferred inflow of resources, and net position"
)
) %>%
filter(!is.na(governmental_2019))
df_2019 %>% glimpse()
```
Use the following to check your work:
```
## NOTE: No need to edit; check a couple problematic values
assertthat::assert_that(
(
df_2019 %>%
filter(category == "Deferred outflow of resources") %>%
pull(business_2019)
) == 1160
)
assertthat::assert_that(
(
df_2019 %>%
filter(category == "Unrestricted") %>%
pull(governmental_2019)
) == -62396
)
print("Excellent!")
```
Where Tabula really shines is in cases where you need to process *many* documents; if you find yourself needing to process a whole folder of PDFs, consider using Tabula.
| Field Specific |
zdelrosario.github.io | https://zdelrosario.github.io/data-science-curriculum/model-logistic-regression.html |
60 Model: Logistic Regression
=============================
*Purpose*: So far we’ve talked about models to predict continuous values. However, we can also use models to make predictions about *binary outcomes*—classification. Classifiers are useful in a wide variety of settings, but they introduce a fair bit more complexity than simple linear models. In this exercise you’ll learn about *logistic regression*: a variation on linear regression that is useful for classification.
*Reading*: [StatQuest: Logistic Regression](https://www.youtube.com/watch?v=vN5cNN2-HWE) (Required, just watch up to 10:47 and don’t worry about the p\-value stuff).
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(modelr)
library(broom)
```
```
##
## Attaching package: 'broom'
```
```
## The following object is masked from 'package:modelr':
##
## bootstrap
```
*Note*: This exercise is heavily inspired by Josh Starmer’s [logistic regression](https://github.com/StatQuest/logistic_regression_demo/blob/master/logistic_regression_demo.R) example.
*Background*: This exercise’s data comes from the UCI Machine Learning Database; specifically their [Heart Disease Data Set](https://archive.ics.uci.edu/ml/datasets/Heart+Disease). These data consist of clinical measurements on patients, and are intended to help study heart disease.
60\.1 Setup
-----------
Note: The following chunk contains *a lot of stuff*, but you already did this in e\-data13\-cleaning!
```
## NOTE: No need to edit; you did all this in a previous exercise!
url_disease <- "http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data"
filename_disease <- "./data/uci_heart_disease.csv"
## Download the data locally
curl::curl_download(
url_disease,
destfile = filename_disease
)
## Wrangle the data
col_names <- c(
"age",
"sex",
"cp",
"trestbps",
"chol",
"fbs",
"restecg",
"thalach",
"exang",
"oldpeak",
"slope",
"ca",
"thal",
"num"
)
## Recoding functions
convert_sex <- function(x) {
case_when(
x == 1 ~ "male",
x == 0 ~ "female",
TRUE ~ NA_character_
)
}
convert_cp <- function(x) {
case_when(
x == 1 ~ "typical angina",
x == 2 ~ "atypical angina",
x == 3 ~ "non-anginal pain",
x == 4 ~ "asymptomatic",
TRUE ~ NA_character_
)
}
convert_fbs <- function(x) {
if_else(x == 1, TRUE, FALSE)
}
convert_restecv <- function(x) {
case_when(
x == 0 ~ "normal",
x == 1 ~ "ST-T wave abnormality",
x == 2 ~ "Estes' criteria",
TRUE ~ NA_character_
)
}
convert_exang <- function(x) {
if_else(x == 1, TRUE, FALSE)
}
convert_slope <- function(x) {
case_when(
x == 1 ~ "upsloping",
x == 2 ~ "flat",
x == 3 ~ "downsloping",
TRUE ~ NA_character_
)
}
convert_thal <- function(x) {
case_when(
x == 3 ~ "normal",
x == 6 ~ "fixed defect",
x == 7 ~ "reversible defect",
TRUE ~ NA_character_
)
}
## Load and wrangle
df_heart_disease <-
read_csv(
filename_disease,
col_names = col_names,
col_types = cols(
"age" = col_number(),
"sex" = col_number(),
"cp" = col_number(),
"trestbps" = col_number(),
"chol" = col_number(),
"fbs" = col_number(),
"restecg" = col_number(),
"thalach" = col_number(),
"exang" = col_number(),
"oldpeak" = col_number(),
"slope" = col_number(),
"ca" = col_number(),
"thal" = col_number(),
"num" = col_number()
)
) %>%
mutate(
sex = convert_sex(sex),
cp = convert_cp(cp),
fbs = convert_fbs(fbs),
restecg = convert_restecv(restecg),
exang = convert_exang(exang),
slope = convert_slope(slope),
thal = convert_thal(thal)
)
```
```
## Warning: One or more parsing issues, call `problems()` on your data frame for details,
## e.g.:
## dat <- vroom(...)
## problems(dat)
```
```
df_heart_disease
```
```
## # A tibble: 303 × 14
## age sex cp trest…¹ chol fbs restecg thalach exang oldpeak slope
## <dbl> <chr> <chr> <dbl> <dbl> <lgl> <chr> <dbl> <lgl> <dbl> <chr>
## 1 63 male typical… 145 233 TRUE Estes'… 150 FALSE 2.3 down…
## 2 67 male asympto… 160 286 FALSE Estes'… 108 TRUE 1.5 flat
## 3 67 male asympto… 120 229 FALSE Estes'… 129 TRUE 2.6 flat
## 4 37 male non-ang… 130 250 FALSE normal 187 FALSE 3.5 down…
## 5 41 female atypica… 130 204 FALSE Estes'… 172 FALSE 1.4 upsl…
## 6 56 male atypica… 120 236 FALSE normal 178 FALSE 0.8 upsl…
## 7 62 female asympto… 140 268 FALSE Estes'… 160 FALSE 3.6 down…
## 8 57 female asympto… 120 354 FALSE normal 163 TRUE 0.6 upsl…
## 9 63 male asympto… 130 254 FALSE Estes'… 147 FALSE 1.4 flat
## 10 53 male asympto… 140 203 TRUE Estes'… 155 TRUE 3.1 down…
## # … with 293 more rows, 3 more variables: ca <dbl>, thal <chr>, num <dbl>, and
## # abbreviated variable name ¹trestbps
```
The data above are *clean*, but we still need to prepare them for *modeling*. Remember from e\-data13\-cleaning that we had to filter out rows with `NA` values. Additionally, we’re going to convert `num` (a numerical factor) into a binary outcome indicating the presence of heart disease:
```
## NOTE: No need to edit; preparing the data for modeling
df_data <-
df_heart_disease %>%
rowid_to_column() %>%
## Filter rows with NA's (you did this in e-data13-cleaning)
filter(!is.na(ca), !is.na(thal)) %>%
## Create binary outcome for heart disease
mutate(heart_disease = num > 0)
```
The last step of data setup is up to you!
### 60\.1\.1 **q1** Perform a train\-validate split of `df_data`. Make sure to *shuffle* the data when splitting, and ensure that `df_train` and `df_validate` together contain the entire dataset.
```
n_train <- 200
df_train <-
df_data %>%
slice_sample(n = n_train)
df_validate <-
anti_join(
df_data,
df_train,
by = "rowid"
)
```
Use the following to check your code.
```
## NOTE: No need to change this
# Correct size
assertthat::assert_that(
dim(bind_rows(df_train, df_validate))[1] == dim(df_data)[1]
)
```
```
## [1] TRUE
```
```
# All rowid's appear exactly once
assertthat::assert_that(all(
bind_rows(df_train, df_validate) %>% count(rowid) %>% pull(n) == 1
))
```
```
## [1] TRUE
```
```
# Data shuffled
assertthat::assert_that(
!all(
bind_rows(df_train, df_validate) %>% pull(rowid) ==
df_data %>% pull(rowid)
)
)
```
```
## [1] TRUE
```
```
print("Well done!")
```
```
## [1] "Well done!"
```
60\.2 Logistic Regression
-------------------------
As the required video introduced, logistic regression bears some resemblance to linear regression. However, rather than predicting *continuous* outcomes (such as the price of a diamond), we will instead predict a *binary* outcome (in the present exercise: whether or not a given patient has heart disease).
In order to “fit a line” to this kind of binary data, we make a careful choice about what to model: Rather than model the binary outcome directly, we instead model the *probability* (a continuous value) that a given observation falls into one category or the other. We can then categorize observations based on predicted probabilities with some user\-specified threshold (which we’ll talk more about in a future exercise).
There’s one more trick we need to make this scheme work: Probabilities lie in the interval \\(p \\in \[0, 1]\\), but the response of a linear regression can be any real value \\(x \\in (\-\\infty, \+\\infty)\\). To deal with this, we use the *logit function* to “warp space” and transform between the interval \\(p \\in \[0, 1]\\) and the whole real line \\(x \\in (\-\\infty, \+\\infty)\\).
```
## We'll need the logit and inverse-logit functions to "warp space"
logit <- function(p) {
odds_ratio <- p / (1 - p)
log(odds_ratio)
}
inv.logit <- function(x) {
exp(x) / (1 + exp(x))
}
```
The result of the logit function is a [log\-odds ratio](https://www.youtube.com/watch?v=ARfXDSkQf1Y), which is just a different way of expressing a probability. This is what it looks like to map from probabilities `p` to log\-odds ratios:
```
tibble(p = seq(0, 1, length.out = 100)) %>%
mutate(x = logit(p)) %>%
ggplot(aes(p, x)) +
geom_vline(xintercept = 0, linetype = 2) +
geom_vline(xintercept = 1, linetype = 2) +
geom_line() +
labs(x = "Probability", y = "Logit Value (log-odds ratio)")
```
And this is what it looks like to carry out the *inverse mapping* from log\-odds ratios to probabilities:
```
tibble(p = seq(0, 1, length.out = 100)) %>%
mutate(x = logit(p)) %>%
ggplot(aes(x, p)) +
geom_hline(yintercept = 0, linetype = 2) +
geom_hline(yintercept = 1, linetype = 2) +
geom_line() +
labs(y = "Probability", x = "Logit Value (log-odds ratio)")
```
This curve (the inverse\-logit) is the one we’ll stretch and shift in order to fit a logistic regression.
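Before moving on, here is a quick numerical check (an illustration only, not part of the exercise) that `logit()` and `inv.logit()` defined above really are inverses of one another:
```
## Illustration only: logit() and inv.logit() undo one another
logit(0.75) # log(0.75 / 0.25) = log(3), roughly 1.10
inv.logit(logit(0.75)) # recovers 0.75
```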
60\.3 A Worked Example
----------------------
The following code chunk fits a logistic regression model to your training data, predicts classification probabilities on the validation data, and visualizes the results so we can assess the model. You’ll practice carrying out these steps soon: First let’s practice interpreting a logistic regression model’s outputs.
### 60\.3\.1 **q2** Run the following code and study the results. Answer the questions under *observations* below.
```
## NOTE: No need to edit; just run and answer the questions below
## Fit a basic logistic regression model: biological-sex only
fit_basic <- glm(
formula = heart_disease ~ sex,
data = df_train,
family = "binomial"
)
## Predict the heart disease probabilities on the validation data
df_basic <-
df_validate %>%
add_predictions(fit_basic, var = "log_odds_ratio") %>%
arrange(log_odds_ratio) %>%
rowid_to_column(var = "order") %>%
## Remember that logistic regression fits the log_odds_ratio;
## convert this to a probability with inv.logit()
mutate(pr_heart_disease = inv.logit(log_odds_ratio))
## Plot the predicted probabilities and actual classes
df_basic %>%
ggplot(aes(order, pr_heart_disease, color = heart_disease)) +
geom_hline(yintercept = 0.5, linetype = 2) +
geom_point() +
facet_grid(~ sex, scales = "free_x") +
coord_cartesian(ylim = c(0, 1)) +
theme(legend.position = "bottom") +
labs(
x = "Rank-ordering of Predicted Probability",
y = "Predicted Probability of Heart Disease"
)
```
**Observations**:
* With a threshold at `0.5` the model would perform poorly: it appears we would miss a large number of people with heart disease.
* This model only considers the binary variable `sex`; thus the model only predicts two probability values, one for female and one for male.
In the next modeling exercise we’ll discuss how to *quantitatively* assess the results of a classifier. For the moment, know that our objective is usually to maximize the rates of true positives (TP) and true negatives (TN). In our example, true positives are when we correctly identify the presence of heart disease, and true negatives are when we correctly flag the absence of heart disease. Note that we can make errors in “either direction”: a false positive (FP) or a false negative (FN), depending on the underlying true class.
```
## NOTE: No need to edit; run and inspect
pr_threshold <- 0.5
df_basic %>%
mutate(
true_positive = (pr_heart_disease > pr_threshold) & heart_disease,
false_positive = (pr_heart_disease > pr_threshold) & !heart_disease,
true_negative = (pr_heart_disease <= pr_threshold) & !heart_disease,
false_negative = (pr_heart_disease <= pr_threshold) & heart_disease
) %>%
summarize(
TP = sum(true_positive),
FP = sum(false_positive),
TN = sum(true_negative),
FN = sum(false_negative)
)
```
```
## # A tibble: 1 × 4
## TP FP TN FN
## <int> <int> <int> <int>
## 1 36 27 26 8
```
These numbers don’t mean a whole lot on their own; we’ll use them to compare performance across models. Next you’ll practice using R functions to carry out logistic regression for classification, and build a model to compare against this basic one.
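Since the same counting logic appears again in q5 below, one option (a sketch only; the helper name `summarize_confusion` is ours, not part of the exercise) is to wrap it in a small function:
```
## Sketch only: wrap the TP / FP / TN / FN counts in a reusable helper
## (summarize_confusion is a hypothetical name, not used elsewhere in this exercise)
summarize_confusion <- function(df, threshold) {
  df %>%
    summarize(
      TP = sum((pr_heart_disease > threshold) & heart_disease),
      FP = sum((pr_heart_disease > threshold) & !heart_disease),
      TN = sum((pr_heart_disease <= threshold) & !heart_disease),
      FN = sum((pr_heart_disease <= threshold) & heart_disease)
    )
}
## This should reproduce the table above
df_basic %>% summarize_confusion(threshold = 0.5)
```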
60\.4 Doing Logistic Regression
-------------------------------
### 60\.4\.1 **q3** Using the code from q2 as a pattern, fit a logistic regression model to `df_train`.
```
fit_q3 <- glm(
formula = heart_disease ~ . - num,
data = df_train,
family = "binomial"
)
```
Use the following to check your work.
```
## NOTE: No need to change this
# Correct size
assertthat::assert_that(dim(
df_validate %>%
add_predictions(fit_q3)
)[1] > 0)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
### 60\.4\.2 **q4** Recall that logistic regression predicts log\-odds\-ratio values; add these predictions to `df_validate` and convert them to probabilities `pr_heart_disease`.
```
df_q4 <-
df_validate %>%
add_predictions(fit_q3, var = "log_odds_ratio") %>%
mutate(pr_heart_disease = inv.logit(log_odds_ratio))
## Plot the predicted probabilities and actual classes
df_q4 %>%
arrange(pr_heart_disease) %>%
rowid_to_column(var = "order") %>%
ggplot(aes(order, pr_heart_disease, color = heart_disease)) +
geom_point() +
coord_cartesian(ylim = c(0, 1)) +
theme(legend.position = "bottom") +
labs(
x = "Rank-ordering of Predicted Probability",
y = "Predicted Probability of Heart Disease"
)
```
Use the following to check your code.
```
## NOTE: No need to change this
# Correct size
assertthat::assert_that(all(
df_q4 %>%
mutate(check = (0 <= pr_heart_disease) & (pr_heart_disease <= 1)) %>%
pull(check)
))
```
```
## [1] TRUE
```
```
print("Excellent!")
```
```
## [1] "Excellent!"
```
### 60\.4\.3 **q5** Inspect your graph from q4 and choose a threshold for classification. Compare your count of true positives (TP) and true negatives (TN) to the model above.
```
## NOTE: This is a somewhat subjective choice; we'll learn some principles
## in the next modeling exercise.
pr_threshold <- 0.75
## NOTE: No need to edit this; just inspect the results
df_q4 %>%
mutate(
true_positive = (pr_heart_disease > pr_threshold) & heart_disease,
false_positive = (pr_heart_disease > pr_threshold) & !heart_disease,
true_negative = (pr_heart_disease <= pr_threshold) & !heart_disease,
false_negative = (pr_heart_disease <= pr_threshold) & heart_disease
) %>%
summarize(
TP = sum(true_positive),
FP = sum(false_positive),
TN = sum(true_negative),
FN = sum(false_negative)
)
```
```
## # A tibble: 1 × 4
## TP FP TN FN
## <int> <int> <int> <int>
## 1 29 2 51 15
```
**Observations**:
* My model ended up having fewer true positives.
* My model ended up with many more true negatives.
61 Stats: Randomization
=======================
*Purpose*: You’ve probably heard that a “control” is important for doing science. If you’re lucky, you may have also heard about randomization. These two ideas are the *backbone* of sound data collection. In this exercise, you’ll learn the basics about how to plan data collection.
This is probably **the most important** lesson in this entire class, so I hope you do this exercise very carefully!
*Reading*: (None, this is the reading)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
## NOTE: Don't edit this; this sets up the example
simulate_yield <- function(v) {
## Check assertions
if (length(v) != 6) {
stop("Design must be a vector of length 6")
}
if (length(setdiff(v, c("T", "C"))) != 0) {
stop("Design must be a vector with 'T' and 'C' characters only")
}
if (length(setdiff(c("T", "C"), v)) > 0) {
stop("Design must contain at least one 'T' and at least one 'C'")
}
## Simulate data
tibble(condition = v) %>%
mutate(
condition = fct_relevel(condition, "T", "C"),
plot = row_number(),
yield = if_else(condition == "T", 1, 0) + plot / 3 + rnorm(n(), mean = 1, sd = 0.5)
)
}
```
61\.1 An Example: Fertilizer and Crop Yield
-------------------------------------------
It’s difficult to explain the ideas behind data collection without talking about data to collect, so let’s consider a specific example:
Imagine we’re testing a fertilizer, and we want to know how much it affects the yield of a specific crop. We have access to a farm, which we section off into six plots. In order to determine the effect the fertilizer has, we need to add fertilizer to some plots, and leave other plots without fertilizer (to serve as a comparison). In scientific jargon, these choices are referred to as the “treatment” and “control”. The treatment will have the effect we wish to study, while the control serves as a baseline for a meaningful (quantitative) comparison.
The code below selects a simple arrangement of treatment and control plots.
```
## Define the sequence of treatment (T) and control (C) plots
experimental_design <- c("T", "T", "T", "C", "C", "C")
```
In statistics, the word “design” refers to the “design of *data collection*”. The purposeful planning of data collection is called *statistical design of experiments*.
61\.2 Visualize the Scenario
----------------------------
The following code visualizes the scenario: how experimental conditions are arranged spatially on our test farm.
```
tibble(
condition = experimental_design,
plot = 1:length(experimental_design)
) %>%
ggplot(aes(plot, 1)) +
geom_label(
aes(label = condition, fill = condition),
size = 10
) +
scale_x_continuous(breaks = 1:6) +
scale_y_continuous(breaks = NULL) +
theme_minimal() +
theme(legend.position = "none") +
labs(
x = "Plot of Land",
y = NULL
)
```
Now let’s simulate the results of the experiment!
### 61\.2\.1 **q1** Simulate the results of this experimental design, and answer the questions under *Observations* below.
```
## TODO: Do not edit; run the following code, answer the questions below
## For reproducibility, set the seed
set.seed(101)
## Simulate the experimental yields
experimental_design %>%
simulate_yield() %>%
## Analyze the data
group_by(condition) %>%
summarize(
yield_mean = mean(yield),
yield_sd = sd(yield)
)
```
```
## # A tibble: 2 × 3
## condition yield_mean yield_sd
## <fct> <dbl> <dbl>
## 1 T 2.59 0.391
## 2 C 2.95 0.584
```
*Observations*
* The treatment is to add fertilizer. Based on the description above, I would expect the treatment to have greater yield.
* However, in this case, the control has greater yield by \~`0.3`. This is the reverse of what we would expect!
* The mean difference has the wrong sign!
61\.3 Confound \- Increased yield due to proximity to the river!
----------------------------------------------------------------
What I didn’t tell you about the experimental setup is that there’s a *river* on the right\-hand\-side of the plots:
```
tibble(
condition = experimental_design,
plot = 1:length(experimental_design)
) %>%
ggplot(aes(plot, 1)) +
geom_label(
aes(label = condition, fill = condition),
size = 10
) +
geom_vline(
xintercept = 7,
color = "cornflowerblue",
size = 8
) +
annotate(
"text",
x = 6.7, y = 1.25, label = "River",
hjust = 1
) +
scale_x_continuous(breaks = 1:6) +
scale_y_continuous(breaks = NULL, limits = c(0.5, 1.5)) +
theme_minimal() +
theme(legend.position = "none") +
labs(
x = "Plot of Land",
y = NULL
)
```
```
## Warning: Using `size` aesthetic for lines was deprecated in ggplot2 3.4.0.
## ℹ Please use `linewidth` instead.
```
While fertilizer leads to an increase in crop yield, additional water also leads to a higher crop yield. These are the only plots we have available for planting, and it’s too expensive to move the river, so we’ll have to figure out how to place the plots to deal with this experimental reality.
Terminology: When there are other factors present in an experiment affecting our outcome of interest, we call those factors *confounds*. When we don’t know that a confound exists, it is sometimes called a *lurking variable* (Joiner 1979\).
### 61\.3\.1 **q2** Try defining a different order of the plots to overcome the confound (river).
```
## TODO: Define your own experimental design
## An "obvious" first attempt would be to simply switch the order, but this
## will tend to over-estimate the effect of the treatment
your_design <- c("C", "C", "C", "T", "T", "T")
## NOTE: No need to edit; use this to simulate your design
your_design %>%
simulate_yield() %>%
group_by(condition) %>%
summarize(
yield_mean = mean(yield),
yield_sd = sd(yield)
)
```
```
## # A tibble: 2 × 3
## condition yield_mean yield_sd
## <fct> <dbl> <dbl>
## 1 T 3.58 0.354
## 2 C 1.90 0.481
```
*Observations*
* In my case, I found the treatment to have a higher yield than the control.
* In my case, I found the treatment to have a higher yield by about two units; this is a drastic overestimate of the effect.
* In my case, the effects I see are due to both the treatment and the river; this leads to a strong overestimate of the effect of the treatment on the yield.
61\.4 Randomization to the rescue!
----------------------------------
To recap: We’re trying to accurately estimate the effect of the treatment over the control. However, there is a river that tends to increase the yield of plots nearby. The only thing we can affect in our data collection is where to place the treatment and control plots.
You might be tempted to try to do something “smart” to cancel out the effects of the river (such as alternating the order of `T` and `C`). While that might work for this specific example, in real experiments there are often many different confounds with all sorts of complicated effects on the quantity we seek to study. What we need is a *general\-purpose* way to do statistical design of experiments that can deal with *any* kind of confound.
To that end, **the gold\-standard for dealing with confounds is to *randomize* our data collection**.
The `sample()` function will randomize the order of a vector, as the following code shows.
```
## NOTE: No need to edit; run and inspect
## Start with a simple arrangement
experimental_design %>%
## Randomize the order
sample()
```
```
## [1] "C" "T" "T" "T" "C" "C"
```
If we randomize the order of treatment and control plots, then we *transform* the effects of the river (and any other confounds) into a random effect.
### 61\.4\.1 **q3** Randomize the run order to produce your design. Answer the questions under *Observations* below.
```
## TODO: Complete the code below
## Set the seed for reproducibility
set.seed(101)
## Simulate the experimental results
experimental_design %>%
## Randomize the plot order
sample() %>%
simulate_yield() %>%
group_by(condition) %>%
summarize(
yield_mean = mean(yield),
yield_sd = sd(yield)
)
```
```
## # A tibble: 2 × 3
## condition yield_mean yield_sd
## <fct> <dbl> <dbl>
## 1 T 3.22 0.679
## 2 C 2.63 0.818
```
*Observations*
* In this case, we find the treatment has a higher yield than the control.
* Since we randomized the run order, the confound cannot have a consistent effect on the outcome (on average).
* In my case, I found the difference to be about `0.6` units; this is smaller than the true treatment effect, but less optimistic than before.
61\.5 Why does randomization work?
----------------------------------
Let’s visually compare a naive (sequential) design with a randomized design, and draw a line to represent the effects of the river on crop yield.
```
set.seed(101)
bind_rows(
experimental_design %>%
simulate_yield() %>%
mutate(design = "Naive"),
experimental_design %>%
sample() %>%
simulate_yield() %>%
mutate(design = "Randomized"),
) %>%
ggplot(aes(plot, yield)) +
geom_line(aes(y = plot / 3), color = "cornflowerblue") +
geom_segment(
aes(y = plot / 3, yend = yield, xend = plot),
arrow = arrow(length = unit(0.05, "npc")),
color = "grey70"
) +
geom_point(aes(color = condition)) +
facet_grid(design~.) +
theme_minimal()
```
In the naive case, we’ve placed all our treatments at locations where the river has a low effect, and our controls at locations where the river has a high effect. This results in a consistent effect that reverses the perceived difference between treatment and control.
However, when we randomize, a mix of high and low river effects enter into both the treatment and control conditions. So long as there is an average difference between the treatment and control, we can detect it with sufficiently many samples *from a well\-designed study*.
It’s not randomization alone that’s saving us from confounds: The river will boost production for all plots, so we’ll always see a yield higher than what we’d get with the treatment or control alone. Studying the difference between treatment and control cancels out any constant difference, while randomization scrambles the effect of the river. This is why we combine randomization with a treatment\-to\-control comparison:
* Randomization allows us to transform confounds into a random effect
* Comparing a treatment and control allows us to isolate the effect of the treatment
Using both ideas together—randomization with a control—is the foundation of sound experimental design. A similar idea—[random assignment](https://en.wikipedia.org/wiki/Random_assignment)—is used in medical science to determine the effects of new drugs and other medical interventions.
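To see this play out numerically, here is a small sketch (not part of the exercise; the replication count of 1000 and the name `effect_estimates` are our own choices). It repeats the randomized experiment many times and averages the estimated treatment effect; since `simulate_yield()` adds `1` to the yield of every treated plot, the average should land near `1`:
```
## Sketch only: replicate the randomized design many times and average
## the estimated treatment effect (the simulated "true" effect is 1)
set.seed(101)
effect_estimates <- map_dbl(
  1:1000,
  function(i) {
    df <- experimental_design %>%
      sample() %>%
      simulate_yield()
    mean(df$yield[df$condition == "T"]) - mean(df$yield[df$condition == "C"])
  }
)
mean(effect_estimates)
```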
61\.6 Conclusion
----------------
Here are the key takeaways from this lesson:
* “More data” is not enough for sound science; the *right* data is what you need to understand an effect.
* Getting the right data is a matter of carefully planning your data collection; *designing an experiment*.
* Confounds can confuse our analysis of data, and lead us to make incorrect conclusions.
+ **No amount of fancy math can overcome poorly\-collected data**.
* Randomization, paired with a treatment\-to\-control comparison, is our best tool to deal with confounds.
61\.7 References
----------------
Joiner, B., “Lurking Variables: Some Examples” (1979\) [link](https://www.tandfonline.com/doi/abs/10.1080/00031305.1981.10479361?casa_token=g1ULzOrGeEcAAAAA:5NqMGZtV_fNFTJ55UYlH1m9WhKI5ZYDe6fN8799XCk2pXOuTXWzlUC-ODrLnOoCf_2dyx1wIKoxn)
62 Model: Assessing Classification with ROC
===========================================
*Purpose*: With regression models, we used model metrics in order to assess and select a model (e.g. choose which features we should use). In order to do the same with classification models, we need some quantitative measure of accuracy. However, assessing the “accuracy” of a classifier is far more complicated. To do this, we’ll need to understand the *receiver operating characteristic*.
*Reading*: [StatQuest: ROC and AUC… clearly explained!](https://www.youtube.com/watch?v=4jRBRDbJemM) (Required, \~17 minutes)
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
```
library(modelr)
library(broom)
```
```
##
## Attaching package: 'broom'
```
```
## The following object is masked from 'package:modelr':
##
## bootstrap
```
```
## We'll need the logit and inverse-logit functions to "warp space"
logit <- function(p) {
odds_ratio <- p / (1 - p)
log(odds_ratio)
}
inv.logit <- function(x) {
exp(x) / (1 + exp(x))
}
```
62\.1 Setup
-----------
Note: The following chunk contains *a lot of stuff*, but you already did this in e\-model04\-logistic!
```
## NOTE: No need to edit; you did all this in a previous exercise!
url_disease <- "http://archive.ics.uci.edu/ml/machine-learning-databases/heart-disease/processed.cleveland.data"
filename_disease <- "./data/uci_heart_disease.csv"
## Download the data locally
curl::curl_download(
url_disease,
destfile = filename_disease
)
## Wrangle the data
col_names <- c(
"age",
"sex",
"cp",
"trestbps",
"chol",
"fbs",
"restecg",
"thalach",
"exang",
"oldpeak",
"slope",
"ca",
"thal",
"num"
)
## Recoding functions
convert_sex <- function(x) {
case_when(
x == 1 ~ "male",
x == 0 ~ "female",
TRUE ~ NA_character_
)
}
convert_cp <- function(x) {
case_when(
x == 1 ~ "typical angina",
x == 2 ~ "atypical angina",
x == 3 ~ "non-anginal pain",
x == 4 ~ "asymptomatic",
TRUE ~ NA_character_
)
}
convert_fbs <- function(x) {
if_else(x == 1, TRUE, FALSE)
}
convert_restecv <- function(x) {
case_when(
x == 0 ~ "normal",
x == 1 ~ "ST-T wave abnormality",
x == 2 ~ "Estes' criteria",
TRUE ~ NA_character_
)
}
convert_exang <- function(x) {
if_else(x == 1, TRUE, FALSE)
}
convert_slope <- function(x) {
case_when(
x == 1 ~ "upsloping",
x == 2 ~ "flat",
x == 3 ~ "downsloping",
TRUE ~ NA_character_
)
}
convert_thal <- function(x) {
case_when(
x == 3 ~ "normal",
x == 6 ~ "fixed defect",
x == 7 ~ "reversible defect",
TRUE ~ NA_character_
)
}
## Load and wrangle
df_data <-
read_csv(
filename_disease,
col_names = col_names,
col_types = cols(
"age" = col_number(),
"sex" = col_number(),
"cp" = col_number(),
"trestbps" = col_number(),
"chol" = col_number(),
"fbs" = col_number(),
"restecg" = col_number(),
"thalach" = col_number(),
"exang" = col_number(),
"oldpeak" = col_number(),
"slope" = col_number(),
"ca" = col_number(),
"thal" = col_number(),
"num" = col_number()
)
) %>%
mutate(
sex = convert_sex(sex),
cp = convert_cp(cp),
fbs = convert_fbs(fbs),
restecg = convert_restecv(restecg),
exang = convert_exang(exang),
slope = convert_slope(slope),
thal = convert_thal(thal)
) %>%
rowid_to_column() %>%
## Filter rows with NA's (you did this in e-data13-cleaning)
filter(!is.na(ca), !is.na(thal)) %>%
## Create binary outcome for heart disease
mutate(heart_disease = num > 0)
```
```
## Warning: One or more parsing issues, call `problems()` on your data frame for details,
## e.g.:
## dat <- vroom(...)
## problems(dat)
```
```
set.seed(101)
df_train <-
df_data %>%
slice_sample(n = 200)
df_validate <-
anti_join(
df_data,
df_train,
by = "rowid"
)
```
62\.2 Assessing a Classifier
----------------------------
What makes for a “good” or a “bad” classifier? When studying continuous models, we studied a variety of diagnostic plots and error metrics to assess model accuracy. However, since we’re now dealing with a discrete response, our metrics are going to look very different. In order to assess a classifier, we’re going to need to build up some concepts.
To learn these concepts, let’s return to the basic model from the previous modeling exercise:
```
## NOTE: No need to edit
## Fit a basic logistic regression model: biological-sex only
fit_basic <- glm(
formula = heart_disease ~ sex,
data = df_train,
family = "binomial"
)
## Predict the heart disease probabilities on the validation data
df_basic <-
df_validate %>%
add_predictions(fit_basic, var = "log_odds_ratio") %>%
arrange(log_odds_ratio) %>%
rowid_to_column(var = "order") %>%
mutate(pr_heart_disease = inv.logit(log_odds_ratio))
```
62\.3 Positives and negatives
-----------------------------
With a binary (two\-class) classifier, **there are only 4 possible outcomes of
a single prediction**. We can summarize all four in a table:
|                | Predicted True | Predicted False |
|----------------|----------------|-----------------|
| Actually True  | True Positive  | False Negative  |
| Actually False | False Positive | True Negative   |
*Note*: A table with the total counts of \[TP, FP, FN, TN] is called a
[confusion matrix](https://en.wikipedia.org/wiki/Confusion_matrix).
There are two ways in which we can be correct:
* **True Positive**: We correctly identified a positive case; e.g. we correctly identified that a given patient has heart disease.
* **True Negative**: We correctly identified a negative case; e.g. we correctly identified that a given patient does not have heart disease.
And there are two ways in which we can be incorrect:
* **False Positive**: We predicted a case to be positive, but in reality it was negative; e.g. we predicted that a given patient has heart disease, but in reality they do not have the disease.
* **False Negative**: We predicted a case to be negative, but in reality it was positive; e.g. we predicted that a given patient does not have heart disease, but in reality they do have the disease.
Note that we might have different concerns about false positives and negatives.
For instance in the heart disease case, we might be more concerned with flagging
all possible cases of heart disease, particularly if follow\-up examination can
diagnose heart disease with greater precision. In that case, we might want to
avoid false negatives but accept more false positives.
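As an aside (not part of the exercise), one quick way to tabulate all four counts for the basic model above is to `count()` the predicted class against the actual class; this sketch assumes a detection threshold of 0.5:

```
## Sketch: confusion-matrix counts for the basic model at a 0.5 threshold
df_basic %>%
  mutate(predicted = pr_heart_disease > 0.5) %>%
  count(actual = heart_disease, predicted)
```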
We can make quantitative judgments about these classification tradeoffs by
controlling classification rates with a decision threshold.
62\.4 Classification Rates and Decision Thresholds
--------------------------------------------------
We can summarize the tradeoffs a classifier makes in terms of classification rates. First, let’s introduce some shorthand:
| Shorthand | Meaning                        |
|-----------|--------------------------------|
| TP        | Total count of True Positives  |
| FP        | Total count of False Positives |
| TN        | Total count of True Negatives  |
| FN        | Total count of False Negatives |
Two important rates are the *true positive rate* and *false positive rate*, defined as:
**True Positive Rate** (TPR): The ratio of true positives to all positives, that is:
`TPR = TP / P = TP / (TP + FN)`
We generally want to *maximize* the TPR. In the heart disease example, this is the fraction of patients with heart disease that we correctly diagnose; a higher TPR in this setting means we can follow\-up with and treat more individuals.
**False Positive Rate** (FPR): The ratio of false positives to all negatives, that is:
`FPR = FP / N = FP / (FP + TN)`
We generally want to *minimize* the FPR. In the heart disease example, this is the fraction of patients without heart disease that we falsely diagnose with the disease; a higher FPR in this setting means we will waste valuable time and resources following up with healthy individuals.
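As a quick hypothetical example: a classifier that produces `TP = 8, FN = 2, FP = 3, TN = 7` on some validation set has `TPR = 8 / (8 + 2) = 0.8` and `FPR = 3 / (3 + 7) = 0.3`.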
We can control the TPR and FPR by choosing our decision threshold for our classifier. Remember that in the previous exercise e\-model04\-logistic we set an arbitrary threshold of `pr_heart_disease > 0.5` for detection. We can instead pick a `pr_threshold` to make our classifier more or less sensitive, which will adjust the TPR and FPR. The next task will illustrate this idea.
### 62\.4\.1 **q1** Compute the true positive rate (TPR) and false positive rate (FPR) using the model fitted above, calculating on the validation data.
*Hint 1*: Remember that you can use `summarize(n = sum(boolean))` to count the number of `TRUE` values in a variable `boolean`. Feel free to compute intermediate boolean values with things like `mutate(boolean = (x < 0) & flag)` before your summarize.
*Hint 2*: We did part of this in the previous modeling exercise!
```
pr_threshold <- 0.5
df_q1 <-
df_basic %>%
mutate(
true_positive = (pr_heart_disease > pr_threshold) & heart_disease,
false_positive = (pr_heart_disease > pr_threshold) & !heart_disease,
true_negative = (pr_heart_disease <= pr_threshold) & !heart_disease,
false_negative = (pr_heart_disease <= pr_threshold) & heart_disease
) %>%
summarize(
TP = sum(true_positive),
FP = sum(false_positive),
TN = sum(true_negative),
FN = sum(false_negative)
) %>%
mutate(
TPR = TP / (TP + FN),
FPR = FP / (FP + TN)
)
df_q1
```
```
## # A tibble: 1 × 6
## TP FP TN FN TPR FPR
## <int> <int> <int> <int> <dbl> <dbl>
## 1 26 37 24 10 0.722 0.607
```
Use the following test to check your work.
```
## NOTE: No need to edit; use this to check your work
assertthat::assert_that(
all_equal(
df_q1 %>% select(TPR, FPR),
df_validate %>%
add_predictions(fit_basic, var = "l_heart_disease") %>%
mutate(pr_heart_disease = inv.logit(l_heart_disease)) %>%
summarize(
TP = sum((pr_heart_disease > pr_threshold) & heart_disease),
FP = sum((pr_heart_disease > pr_threshold) & !heart_disease),
TN = sum((pr_heart_disease <= pr_threshold) & !heart_disease),
FN = sum((pr_heart_disease <= pr_threshold) & heart_disease)
) %>%
mutate(
TPR = TP / (TP + FN),
FPR = FP / (FP + TN)
) %>%
select(TPR, FPR)
)
)
```
```
## [1] TRUE
```
```
print("Excellent!")
```
```
## [1] "Excellent!"
```
62\.5 The Receiver Operating Characteristic (ROC) Curve
-------------------------------------------------------
As the required video mentioned, we can summarize TPR and FPR at different threshold values `pr_threshold` with the *receiver operating characteristic curve* (ROC curve). This plot gives us an overview of the tradeoffs we can achieve with our classification model.
The ROC curve shows TPR against FPR. Remember that we want to *maximize* TPR and *minimize* FPR; therefore, the ideal point for the curve to reach is the top\-left point in the graph. A very poor classifier would run along the diagonal—this would be equivalent to randomly guessing the class of each observation. An ROC curve below the diagonal is worse than random guessing!
To compute an ROC curve, we could construct a confusion matrix at each of a variety of thresholds and compute the TPR and FPR for each. However, there’s a small bit of “shortcut code” that does the same thing: if we sort the observations by decreasing predicted probability, the cumulative counts of true and false positives trace out the TPR and FPR as the threshold sweeps through every predicted value. The following chunk illustrates how to compute an ROC curve this way.
### 62\.5\.1 **q2** Inspect the following ROC curve for the basic classifier and assess its performance. Is this an effective classifier? How do you know?
```
## NOTE: No need to edit; run and inspect
df_basic %>%
## Begin: Shortcut code for computing an ROC
arrange(desc(pr_heart_disease)) %>%
summarize(
true_positive_rate = cumsum(heart_disease) / sum(heart_disease),
false_positive_rate = cumsum(!heart_disease) / sum(!heart_disease)
) %>%
## End: Shortcut code for computing an ROC
ggplot(aes(false_positive_rate, true_positive_rate)) +
geom_abline(intercept = 0, slope = 1, linetype = 2) +
geom_step() +
coord_cartesian(xlim = c(0, 1), ylim = c(0, 1)) +
theme_minimal()
```
**Observations**:
* This is a highly ineffective classifier; the ROC curve lies very near the diagonal, indicating the classifier is not much better than random guessing (and for some thresholds, it is worse).
62\.6 Practice Assessing Classifiers
------------------------------------
Let’s get some practice reading ROC curves.
### 62\.6\.1 **q3** Inspect the following ROC curve. Is this an effective classifier? What explains this model’s performance? Is this model valid for prediction?
```
## NOTE: No need to edit
fit_cheating <- glm(
formula = heart_disease ~ num,
data = df_train,
family = "binomial"
)
```
```
## Warning: glm.fit: algorithm did not converge
```
```
## Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
```
```
df_cheating <-
df_validate %>%
add_predictions(fit_cheating, var = "log_odds_ratio") %>%
arrange(log_odds_ratio) %>%
rowid_to_column(var = "order") %>%
mutate(pr_heart_disease = inv.logit(log_odds_ratio))
df_cheating %>%
## Begin: Shortcut code for computing an ROC
arrange(desc(pr_heart_disease)) %>%
summarize(
true_positive_rate = cumsum(heart_disease) / sum(heart_disease),
false_positive_rate = cumsum(!heart_disease) / sum(!heart_disease)
) %>%
## End: Shortcut code for computing an ROC
ggplot(aes(false_positive_rate, true_positive_rate)) +
geom_abline(intercept = 0, slope = 1, linetype = 2) +
geom_step() +
coord_cartesian(xlim = c(0, 1), ylim = c(0, 1)) +
theme_minimal()
```
**Observations**:
* This is an *optimal* classifier; we can achieve TPR \= 1 with FPR \= 0\. In fact, it’s *suspiciously good*….
* This model is using the outcome to predict the outcome! Remember that `heart_disease = num > 0`; this is not a valid way to predict the presence of heart disease.
Next you’ll fit your own model and assess its performance.
### 62\.6\.2 **q4** Fit a model to the training data, and predict class probabilities on the validation data. Compare your model’s performance to that of `fit_baseline` (fitted below).
```
fit_q4 <- glm(
formula = heart_disease ~ age + cp + trestbps,
data = df_train,
family = "binomial"
)
df_q4 <-
df_validate %>%
add_predictions(fit_q4, var = "log_odds_ratio") %>%
arrange(log_odds_ratio) %>%
rowid_to_column(var = "order") %>%
mutate(pr_heart_disease = inv.logit(log_odds_ratio))
## Here's another model for comparison
fit_baseline <- glm(
formula = heart_disease ~ sex + cp + trestbps,
data = df_train,
family = "binomial"
)
df_baseline <-
df_validate %>%
add_predictions(fit_baseline, var = "log_odds_ratio") %>%
arrange(log_odds_ratio) %>%
rowid_to_column(var = "order") %>%
mutate(pr_heart_disease = inv.logit(log_odds_ratio))
## NOTE: No need to edit
bind_rows(
df_q4 %>%
arrange(desc(pr_heart_disease)) %>%
summarize(
true_positive_rate = cumsum(heart_disease) / sum(heart_disease),
false_positive_rate = cumsum(!heart_disease) / sum(!heart_disease)
) %>%
mutate(model = "Personal"),
df_baseline %>%
arrange(desc(pr_heart_disease)) %>%
summarize(
true_positive_rate = cumsum(heart_disease) / sum(heart_disease),
false_positive_rate = cumsum(!heart_disease) / sum(!heart_disease)
) %>%
mutate(model = "Baseline")
) %>%
ggplot(aes(false_positive_rate, true_positive_rate, color = model)) +
geom_abline(intercept = 0, slope = 1, linetype = 2) +
geom_step() +
coord_cartesian(xlim = c(0, 1), ylim = c(0, 1)) +
theme_minimal() +
theme(legend.position = "bottom")
```
**Observations**:
* My model `fit_q4` is comparable in performance to `fit_baseline`; it outperforms (in TPR for fixed FPR) in some places, and underperforms in others.
* As one sweeps from low to high FPR, the TPR increases—at first quickly, then it tapers off to increase slowly. Both FPR and TPR equal zero at the beginning, and both limit to one. The “tradeoff” is that we can have an arbitrarily high TPR, but we “pay” for this through an increase in the FPR.
62\.7 Selecting a Threshold
---------------------------
The ROC summarizes performance characteristics for a *variety* of thresholds `pr_threshold`, but to actually *deploy* a classifier and make decisions, we have to pick a *specific* threshold value. Picking a threshold is *not* just an exercise in mathematics; we need to inform this decision with our intended use\-case.
The following chunk plots potential `pr_threshold` against achieved TPR values for your model. Use this image to pick a classifier threshold.
### 62\.7\.1 **q5** Pick a target TPR value for your heart disease predictor; what is a reasonable value for `TPR`, and why did you pick that value? What values of `pr_threshold` meet or exceed that target TPR? What specific value for `pr_threshold` do you choose, and why?
```
## NOTE: No need to edit this; use these data to pick a threshold
df_thresholds <-
df_q4 %>%
## Begin: Shortcut code for computing an ROC
arrange(desc(pr_heart_disease)) %>%
summarize(
pr_heart_disease = pr_heart_disease,
true_positive_rate = cumsum(heart_disease) / sum(heart_disease),
false_positive_rate = cumsum(!heart_disease) / sum(!heart_disease)
)
## End: Shortcut code for computing an ROC
## TODO: Pick a threshold using df_thresholds above
df_thresholds %>%
filter(true_positive_rate >= 0.9) %>%
filter(false_positive_rate == min(false_positive_rate))
```
```
## # A tibble: 2 × 3
## pr_heart_disease true_positive_rate false_positive_rate
## <dbl> <dbl> <dbl>
## 1 0.252 0.917 0.492
## 2 0.249 0.944 0.492
```
```
pr_threshold <- 0.249
tpr_achieved <- 0.944
fpr_achieved <- 0.492
## NOTE: No need to edit; use this visual to help your decision
df_thresholds %>%
ggplot(aes(true_positive_rate, pr_heart_disease)) +
geom_vline(xintercept = tpr_achieved, linetype = 2) +
geom_hline(yintercept = pr_threshold, linetype = 2) +
geom_step() +
labs(
x = "True Positive Rate",
y = "Pr Threshold"
)
```
**Observations**:
* I pick `TPR > 0.9`, as I want to catch the vast majority of patients with the disease.
* Filtering `df_thresholds` shows that values of `pr_threshold` at or below about `0.252` achieve my desired TPR.
* To pick a specific value for `pr_threshold`, I also try to minimize the FPR. For my case, there is a range of values of `pr_threshold` that can minimize the FPR; therefore I take the more permissive end of the interval.
* Ultimately I picked `pr_threshold = 0.249`, which gave `TPR = 0.944, FPR = 0.492`. This will lead to a lot of false positives, but we will have a very sensitive detector.
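Once a specific `pr_threshold` is chosen, deploying the classifier is just a comparison against that value. A minimal sketch (not part of the exercise), using the objects defined above:

```
## Sketch: hard classifications on the validation data at the chosen threshold
df_q4 %>%
  mutate(predicted_disease = pr_heart_disease > pr_threshold) %>%
  count(heart_disease, predicted_disease)
```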
63 Vis: Control Charts
======================
*Purpose*: Remember in c02\-michelson (q4\) when you studied a *control chart*? Now that we’ve learned about confidence intervals, we can more formally study control charts. These are a key tool in [statistical process control](https://en.wikipedia.org/wiki/Statistical_process_control), which is how manufacturers rigorously track and control the quality of manufactured goods. Control charts help a process manager track when something has gone wrong in a manufacturing line, and are used to determine when a process is running smoothly.
*Reading*: [Example use](https://www.itl.nist.gov/div898/handbook/mpc/section3/mpc3521.htm) of a control chart, based on NIST mass calibration data. (Optional)
*Prerequisites*: c02\-michelson, e\-stat06\-clt
```
library(tidyverse)
```
```
## ── Attaching packages ─────────────────────────────────────── tidyverse 1.3.0 ──
```
```
## ✔ ggplot2 3.4.0 ✔ purrr 1.0.1
## ✔ tibble 3.1.8 ✔ dplyr 1.0.10
## ✔ tidyr 1.2.1 ✔ stringr 1.5.0
## ✔ readr 2.1.3 ✔ forcats 0.5.2
```
```
## ── Conflicts ────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ dplyr::filter() masks stats::filter()
## ✖ dplyr::lag() masks stats::lag()
```
63\.1 Control Chart: Example
----------------------------
Below is an example of a control chart: A control chart is used to help detect when something out\-of\-the\-ordinary occurred. Essentially, it is a tool to help us determine when something non\-random happened so we can plan a follow\-up study and prevent that non\-random event from happening in the future. To do that detection work, we look for patterns.
Note that two kinds of patterns have been highlighted below: an outlier that lies outside the *control limits*, and a “run” of batch means that all lie off\-center (to one side of the “grand mean,” the solid line).
[Figure: example control chart, with an outlier outside the control limits and a run of off\-center batch means highlighted]
63\.2 Control chart steps
-------------------------
To construct and use a control chart, follow these steps:
1. Group individual observations into consecutive batches, say 4 to 10 observations. If parts are manufactured in batches, then use those groups.
2. Take the mean of each batch, compute the “grand mean” based on all of the data, and establish “control limits” based on a confidence interval for the batch means (where \\(n\\) is your batch size); see the formula below.
3. Plot each batch mean sequentially, and visually indicate the control limits and grand mean on your plot.
4. Compare each batch mean against the control limits and grand mean. Look for patterns to suggest batches where something out\-of\-the\-ordinary happened. Some examples include:
* Batches that fall outside the control limits
* Consecutive batches that lie above or below the mean
* Persistent up\-and\-down patterns
If there are no coherent patterns and if only an expected number of batch means fall outside the control limits, then there is no evidence for non\-random behavior. A process free of any obvious non\-random behavior is said to be under *statistical control*, or to be a [stable process](https://en.wikipedia.org/wiki/Statistical_process_control#Stable_process).
If you *do* detect something out\-of\-the\-ordinary, use the batch index to go investigate those cases in greater detail. **A control chart helps you *detect* when something went wrong—it does *not* tell you what went wrong.**
Like any form of EDA, it is also a good idea to experiment with different batch sizes.
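Concretely, this exercise uses a 3\-sigma interval (see q2 below), so the control limits from step 2 take the form `grand_mean ± 3 * sd(X) / sqrt(n)`, where `sd(X)` is the standard deviation of the individual observations and `n` is the batch size.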
63\.3 Data Preparation
----------------------
To illustrate the control chart concept, let’s first generate some data that is completely random.
```
set.seed(101)
df_data <-
tibble(X = rnorm(n = 1000))
```
Following Step 1, we need to assign batch identifiers to group the data.
### 63\.3\.1 **q1** Use integer division `%/%` and the `row_number()` helper to group consecutive observations into groups of `4` with a common identifier `id`.
*Hint*: Since `R` is a one\-based index language, you will need to adjust the output of `row_number()` before performing the integer division `%/%`.
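For instance, here is a quick illustration (not part of the exercise) of how the integer-division pattern assigns identifiers:

```
## Rows 1-4 map to id 0, rows 5-8 map to id 1, and so on
(1:8 - 1) %/% 4
## [1] 0 0 0 0 1 1 1 1
```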
```
df_q1 <-
df_data %>%
mutate(id = (row_number() - 1) %/% 4)
df_q1
```
```
## # A tibble: 1,000 × 2
## X id
## <dbl> <dbl>
## 1 -0.326 0
## 2 0.552 0
## 3 -0.675 0
## 4 0.214 0
## 5 0.311 1
## 6 1.17 1
## 7 0.619 1
## 8 -0.113 1
## 9 0.917 2
## 10 -0.223 2
## # … with 990 more rows
```
Use the following to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(
df_q1 %>%
filter(row_number() <= 4) %>%
summarize(sd = sd(id)) %>%
pull(sd) == 0
)
```
```
## [1] TRUE
```
```
assertthat::assert_that(
df_q1 %>%
filter(row_number() >= 997) %>%
summarize(sd = sd(id)) %>%
pull(sd) == 0
)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
63\.4 Generate Control Limits
-----------------------------
Next, we’ll use our knowledge about confidence intervals and the CLT to set the *control limits*, based on our batch size.
### 63\.4\.1 **q2** Use a central limit theorem (CLT) approximation to set 3 sigma confidence interval limits on the group mean you computed above.
*Note*: A 3 sigma bound corresponds to a coverage probability of `1 - pnorm(-3) * 2`; approximately \\(99\.7%\\).
*Hint*: Think carefully about how many samples will be in each *group*, not in your dataset in total.
```
X_grand <-
df_data %>%
summarize(X_mean = mean(X)) %>%
pull(X_mean)
X_sd <-
df_data %>%
summarize(X_sd = sd(X)) %>%
pull(X_sd)
X_g4_lo <- X_grand - 3 * X_sd / sqrt(4)
X_g4_up <- X_grand + 3 * X_sd / sqrt(4)
X_g4_lo
```
```
## [1] -1.473529
```
```
X_grand
```
```
## [1] -0.03486206
```
```
X_g4_up
```
```
## [1] 1.403805
```
Use the following to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(abs(X_g4_lo + 3 / sqrt(4)) < 0.1)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(X_grand) < 0.05)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(X_g4_up - 3 / sqrt(4)) < 0.1)
```
```
## [1] TRUE
```
```
print("Excellent!")
```
```
## [1] "Excellent!"
```
63\.5 Visualize and Interpret
-----------------------------
### 63\.5\.1 **q3** Inspect the following control chart, and answer the questions under *Observations* below.
```
## NOTE: No need to edit; run and inspect
df_q1 %>%
group_by(id) %>%
summarize(X_batch = mean(X)) %>%
ggplot(aes(id, X_batch)) +
geom_hline(yintercept = X_g4_lo, linetype = "dashed") +
geom_hline(yintercept = X_grand) +
geom_hline(yintercept = X_g4_up, linetype = "dashed") +
geom_point() +
theme_minimal() +
labs(
x = "Batch Index",
y = "Batch Mean"
)
```
**Observations**:
\- I would expect about `1 - pnorm(-3) * 2` (roughly \\(99\.7%\\)) of the points to lie inside the control bounds. Given that we have 250 batch means and `250 * 0.003` is roughly 0.75, I’d expect about one to lie outside.
\- One point lies outside the control limits, as we would expect if the points were completely random.
\- I don’t see any coherent pattern in the points. This makes sense, as these points *are* random (by construction).
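If you want to verify that count programmatically rather than by eye, here is a quick sketch (not part of the exercise), using the objects defined above:

```
## Count how many batch means fall outside the control limits
df_q1 %>%
  group_by(id) %>%
  summarize(X_batch = mean(X)) %>%
  summarize(n_outside = sum(X_batch < X_g4_lo | X_batch > X_g4_up))
```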
63\.6 Control Chart: Application
--------------------------------
Next you will construct a control chart for a real dataset. The following code downloads and parses a dataset from the NIST website studying [proof masses](https://en.wikipedia.org/wiki/Proof_mass). These are comparative measurements between “exact” 1 kilogram masses, carried out by one of the world’s most\-rigorous measurement societies.
```
## NO NEED TO EDIT; the following will download and read the data
url <- "https://www.itl.nist.gov/div898/handbook/datasets/MASS.DAT"
filename <- "./data/nist-mass.dat"
download.file(url, filename)
df_mass <-
read_table(
filename,
skip = 25,
col_names = c(
"date",
"standard_id",
"Y",
"balance_id",
"residual_sd",
"design_id"
)
)
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## date = col_double(),
## standard_id = col_double(),
## Y = col_double(),
## balance_id = col_double(),
## residual_sd = col_double(),
## design_id = col_double()
## )
```
```
df_mass
```
```
## # A tibble: 217 × 6
## date standard_id Y balance_id residual_sd design_id
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 75.9 41 -19.5 12 0.0217 41
## 2 75.9 41 -19.5 12 0.0118 41
## 3 76.0 41 -19.5 12 0.0232 41
## 4 76.1 41 -19.5 12 0.021 41
## 5 76.6 41 -19.5 12 0.0265 41
## 6 76.7 41 -19.5 12 0.0317 41
## 7 77.2 41 -19.5 12 0.0194 41
## 8 77.3 41 -19.5 12 0.0316 41
## 9 77.6 41 -19.5 12 0.0274 41
## 10 77.7 41 -19.5 12 0.0361 41
## # … with 207 more rows
```
Note that `Y` denotes a kind of comparison between multiple “exact” 1 kilogram masses (with `Y` measured in micrograms), while `date` denotes the (fractional) years since 1900\.
First, try plotting and interpreting the data without a control chart.
### 63\.6\.1 **q4** Plot the measured values `Y` against their date of measurement `date`. Answer the questions under *Observations* below.
```
## TASK: Plot Y vs date (one possible solution)
df_mass %>%
  ggplot(aes(date, Y)) +
  geom_point()
```
**Observations**:
\- There is considerable variation! Measurements of *any* kind are *not* exactly repeatable; variability is unavoidable.
\- There seem to be two “upward” trends; one in the late 1970’s, and another from the mid 80’s onward.
\- The “density” of measurements is not consistent; it seems that many measurements are taken around the same time, with much more “sparse” measurements in between.
\- Something odd seems to have happened after 1985; the measurements seem to shift upward.
Next, prepare the data for a control chart of the NIST data.
### 63\.6\.2 **q5** Generate control chart data for a batch size of `10`.
```
n_group <- 10
df_q4 <-
df_mass %>%
mutate(group_id = (row_number() - 1) %/% n_group) %>%
group_by(group_id) %>%
summarize(
Y_mean = mean(Y),
date = min(date)
)
Y_mean <-
df_mass %>%
summarize(Y_mean = mean(Y)) %>%
pull(Y_mean)
Y_sd <-
df_mass %>%
summarize(Y_sd = sd(Y)) %>%
pull(Y_sd)
Y_g_lo <- Y_mean - 3 * Y_sd / sqrt(n_group)
Y_g_up <- Y_mean + 3 * Y_sd / sqrt(n_group)
Y_g_lo
```
```
## [1] -19.49853
```
```
Y_mean
```
```
## [1] -19.46452
```
```
Y_g_up
```
```
## [1] -19.43051
```
Use the following to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(abs(Y_g_lo + 19.49853) < 1e-4)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(Y_mean + 19.46452) < 1e-4)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(Y_g_up + 19.43051) < 1e-4)
```
```
## [1] TRUE
```
```
print("Excellent!")
```
```
## [1] "Excellent!"
```
Next, plot the control chart data and inspect.
### 63\.6\.3 **q6** Run the following chunk and answer the questions under *Observations* below.
```
## NOTE: No need to edit; run and inspect
df_q4 %>%
ggplot(aes(date, Y_mean)) +
geom_hline(yintercept = Y_g_lo, linetype = "dashed") +
geom_hline(yintercept = Y_mean) +
geom_hline(yintercept = Y_g_up, linetype = "dashed") +
geom_point() +
theme_minimal() +
labs(
x = "Batch Year",
y = "Measurement (micrograms)"
)
```
**Observations**:
\- There is considerable variation! Measurements of *any* kind are *not* exactly repeatable; variability is unavoidable.
\- Still the case in this control chart!
\- There seem to be two “upward” trends; one in the late 1970’s, and another from the mid 80’s onward.
\- This is much more prominent in the control chart
\- The “density” of measurements is not consistent; it seems that many measurements are taken around the same time, with much more “sparse” measurements in between.
\- This is a bit harder to see in the control chart; we have to keep in mind that each batch represents multiple measurements.
\- Something odd seems to have happened after 1985; the measurements seem to shift upward.
\- More visually obvious in the control chart.
* We can also make quantitative statements with the control chart: There are three points violating the control limits in the late 80’s. There is also one batch violating the control limit before 1985\.
* From 1980 to 1985, many of the batches lay below the grand mean; something seems off here.
Remember that a control chart is only a *detection* tool; to say more, you would need to go investigate the data collection process.
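Section 63\.2 suggests experimenting with different batch sizes; a small helper makes that easy. The following is a hedged sketch (not part of the exercise); `make_control_chart()` is a hypothetical name, and it assumes a data frame and numeric column like those used above:

```
## Sketch: batch means and 3-sigma control limits for an arbitrary batch size
make_control_chart <- function(df, var, n_batch) {
  ## Control limits from the individual observations (3-sigma convention)
  x <- df %>% pull({{ var }})
  center <- mean(x)
  lo <- center - 3 * sd(x) / sqrt(n_batch)
  up <- center + 3 * sd(x) / sqrt(n_batch)
  ## Batch means, plotted against the control limits
  df %>%
    mutate(batch = (row_number() - 1) %/% n_batch) %>%
    group_by(batch) %>%
    summarize(batch_mean = mean({{ var }}), .groups = "drop") %>%
    ggplot(aes(batch, batch_mean)) +
    geom_hline(yintercept = lo, linetype = "dashed") +
    geom_hline(yintercept = center) +
    geom_hline(yintercept = up, linetype = "dashed") +
    geom_point() +
    theme_minimal() +
    labs(x = "Batch Index", y = "Batch Mean")
}

## Example use (uncomment to run):
# make_control_chart(df_mass, Y, n_batch = 5)
# make_control_chart(df_mass, Y, n_batch = 10)
```

Rerunning the helper with a few different `n_batch` values is a quick way to check whether the patterns you see are robust to the batching choice.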
63\.1 Control Chart: Example
----------------------------
Below is an example of a control chart: A control chart is used to help detect when something out\-of\-the\-ordinary occurred. Essentially, it is a tool to help us determine when something non\-random happened so we can plan a follow\-up study and prevent that non\-random event from happening in the future. To do that detection work, we look for patterns.
Note that two kinds of patterns have been highlighted below: an outlier that lies outside the *control limits*, and a “run” of batch means that all lie off\-center (to one side of the “grand mean,” the solid line).
control chart
63\.2 Control chart steps
-------------------------
To construct and use a control chart, follow these steps:
1. Group individual observations into consecutive batches, say 4 to 10 observations. If parts are manufactured in batches, then use those groups.
2. Take the mean of each batch, compute the “grand mean” based on all of the data, and establish “control limits” based on a confidence interval for the batch means (where \\(n\\) is your batch size).
3. Plot each batch mean sequentially, and visually indicate the control limits and grand mean on your plot.
4. Compare each batch mean against the control limits and grand mean. Look for patterns to suggest batches where something out\-of\-the\-ordinary happened. Some examples include:
* Batches that fall outside the control limits
* Consecutive batches that lie above or below the mean
* Persistent up\-and\-down patterns
If there are no coherent patterns and if only an expected number of batch means fall outside the control limits, then there is no evidence for non\-random behavior. A process free of any obvious non\-random behavior is said to be under *statistical control*, or to be a [stable process](https://en.wikipedia.org/wiki/Statistical_process_control#Stable_process).
If you *do* detect something out\-of\-the\-ordinary, use the batch index to go investigate those cases in greater detail. **A control chart helps you *detect* when something went wrong—it does *not* tell you what went wrong.**
Like any form of EDA, it is also a good idea to experiment with different batch sizes.
63\.3 Data Preparation
----------------------
To illustrate the control chart concept, let’s first generate some data that is completely random.
```
set.seed(101)
df_data <-
tibble(X = rnorm(n = 1000))
```
Following Step 1, we need to assign batch identifiers to group the data.
### 63\.3\.1 **q1** Use integer division `%/%` and the `row_number()` helper to group consecutive observations into groups of `4` with a common identifier `id`.
*Hint*: Since `R` is a one\-based index language, you will need to adjust the output of `row_number()` before performing the integer division `%/%`.
```
df_q1 <-
df_data %>%
mutate(id = (row_number() - 1) %/% 4)
df_q1
```
```
## # A tibble: 1,000 × 2
## X id
## <dbl> <dbl>
## 1 -0.326 0
## 2 0.552 0
## 3 -0.675 0
## 4 0.214 0
## 5 0.311 1
## 6 1.17 1
## 7 0.619 1
## 8 -0.113 1
## 9 0.917 2
## 10 -0.223 2
## # … with 990 more rows
```
Use the following to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(
df_q1 %>%
filter(row_number() <= 4) %>%
summarize(sd = sd(id)) %>%
pull(sd) == 0
)
```
```
## [1] TRUE
```
```
assertthat::assert_that(
df_q1 %>%
filter(row_number() >= 997) %>%
summarize(sd = sd(id)) %>%
pull(sd) == 0
)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
### 63\.3\.1 **q1** Use integer division `%/%` and the `row_number()` helper to group consecutive observations into groups of `4` with a common identifier `id`.
*Hint*: Since `R` is a one\-based index language, you will need to adjust the output of `row_number()` before performing the integer division `%/%`.
```
df_q1 <-
df_data %>%
mutate(id = (row_number() - 1) %/% 4)
df_q1
```
```
## # A tibble: 1,000 × 2
## X id
## <dbl> <dbl>
## 1 -0.326 0
## 2 0.552 0
## 3 -0.675 0
## 4 0.214 0
## 5 0.311 1
## 6 1.17 1
## 7 0.619 1
## 8 -0.113 1
## 9 0.917 2
## 10 -0.223 2
## # … with 990 more rows
```
Use the following to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(
df_q1 %>%
filter(row_number() <= 4) %>%
summarize(sd = sd(id)) %>%
pull(sd) == 0
)
```
```
## [1] TRUE
```
```
assertthat::assert_that(
df_q1 %>%
filter(row_number() >= 997) %>%
summarize(sd = sd(id)) %>%
pull(sd) == 0
)
```
```
## [1] TRUE
```
```
print("Nice!")
```
```
## [1] "Nice!"
```
63\.4 Generate Control Limits
-----------------------------
Next, we’ll use our knowledge about confidence intervals and the CLT to set the *control limits*, based on our batch size.
### 63\.4\.1 **q2** Use a central limit theorem (CLT) approximation to set 3 sigma confidence interval limits on the group mean you computed above.
*Note*: A 3 sigma bound corresponds to a coverage probability of `1 - pnorm(-3) * 2`; approximately \\(99\.7%\\).
*Hint*: Think carefully about how many samples will be in each *group*, not in your dataset in total.
```
X_grand <-
df_data %>%
summarize(X_mean = mean(X)) %>%
pull(X_mean)
X_sd <-
df_data %>%
summarize(X_sd = sd(X)) %>%
pull(X_sd)
X_g4_lo <- X_grand - 3 * X_sd / sqrt(4)
X_g4_up <- X_grand + 3 * X_sd / sqrt(4)
X_g4_lo
```
```
## [1] -1.473529
```
```
X_grand
```
```
## [1] -0.03486206
```
```
X_g4_up
```
```
## [1] 1.403805
```
Use the following to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(abs(X_g4_lo + 3 / sqrt(4)) < 0.1)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(X_grand) < 0.05)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(X_g4_up - 3 / sqrt(4)) < 0.1)
```
```
## [1] TRUE
```
```
print("Excellent!")
```
```
## [1] "Excellent!"
```
63\.5 Visualize and Interpret
-----------------------------
### 63\.5\.1 **q3** Inspect the following control chart, and answer the questions under *observe* below.
```
## NOTE: No need to edit; run and inspect
df_q1 %>%
group_by(id) %>%
summarize(X_batch = mean(X)) %>%
ggplot(aes(id, X_batch)) +
geom_hline(yintercept = X_g4_lo, linetype = "dashed") +
geom_hline(yintercept = X_grand) +
geom_hline(yintercept = X_g4_up, linetype = "dashed") +
geom_point() +
theme_minimal() +
labs(
x = "Batch Index",
y = "Batch Mean"
)
```
**Observations**:
\- I would expect about `1 - pnorm(-3) * 2`—\\(99\.7%\\) of the points to lie inside the control bounds. Given that we have 250 points, I’d expect about one to lie outside.
\- One point lies outside the control limits, as we would expect if the points were completely random.
\- I don’t see any coherent pattern in the points. This makes sense, as these points *are* random (by construction).
63\.6 Control Chart: Application
--------------------------------
Next you will construct a control chart for a real dataset. The following code downloads and parses a dataset from the NIST website studying [proof masses](https://en.wikipedia.org/wiki/Proof_mass). These are comparative measurements between “exact” 1 kilogram masses, carried out by one of the world’s most\-rigorous measurement societies.
```
## NO NEED TO EDIT; the following will download and read the data
url <- "https://www.itl.nist.gov/div898/handbook/datasets/MASS.DAT"
filename <- "./data/nist-mass.dat"
download.file(url, filename)
df_mass <-
read_table(
filename,
skip = 25,
col_names = c(
"date",
"standard_id",
"Y",
"balance_id",
"residual_sd",
"design_id"
)
)
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## date = col_double(),
## standard_id = col_double(),
## Y = col_double(),
## balance_id = col_double(),
## residual_sd = col_double(),
## design_id = col_double()
## )
```
```
df_mass
```
```
## # A tibble: 217 × 6
## date standard_id Y balance_id residual_sd design_id
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 75.9 41 -19.5 12 0.0217 41
## 2 75.9 41 -19.5 12 0.0118 41
## 3 76.0 41 -19.5 12 0.0232 41
## 4 76.1 41 -19.5 12 0.021 41
## 5 76.6 41 -19.5 12 0.0265 41
## 6 76.7 41 -19.5 12 0.0317 41
## 7 77.2 41 -19.5 12 0.0194 41
## 8 77.3 41 -19.5 12 0.0316 41
## 9 77.6 41 -19.5 12 0.0274 41
## 10 77.7 41 -19.5 12 0.0361 41
## # … with 207 more rows
```
Note that `Y` denotes a kind of comparison between multiple “exact” 1 kilogram masses (with `Y` measured in micrograms), while `date` denotes the (fractional) years since 1900\.
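If you want to see the measurement period on a calendar scale (an aside; this assumes the years\-since\-1900 encoding described above), you can run:
```
## Convert the fractional dates to approximate calendar years
range(df_mass$date) + 1900
```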
First, try plotting and interpreting the data without a control chart.
### 63\.6\.1 **q4** Plot the measured values `Y` against their date of measurement `date`. Answer the questions under *Observations* below.
```
## TASK: Plot Y vs date
```
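One minimal way to complete the task (a sketch, not the only valid answer):
```
## One possible plot of Y vs date
df_mass %>%
  ggplot(aes(date, Y)) +
  geom_point() +
  theme_minimal() +
  labs(
    x = "Date (years since 1900)",
    y = "Y (micrograms)"
  )
```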
**Observations**:
\- There is considerable variation! Measurements of *any* kind are *not* exactly repeatable; variability is unavoidable.
\- There seem to be two “upward” trends; one in the late 1970’s, and another from the mid 80’s onward.
\- The “density” of measurements is not consistent; it seems that many measurements are taken around the same time, with much more “sparse” measurements in between.
\- Something odd seems to have happened after 1985; the measurements seem to shift upward.
Next, prepare the data for a control chart of the NIST data.
### 63\.6\.2 **q5** Generate control chart data for a batch size of `10`.
```
n_group <- 10
df_q4 <-
df_mass %>%
mutate(group_id = (row_number() - 1) %/% n_group) %>%
group_by(group_id) %>%
summarize(
Y_mean = mean(Y),
date = min(date)
)
Y_mean <-
df_mass %>%
summarize(Y_mean = mean(Y)) %>%
pull(Y_mean)
Y_sd <-
df_mass %>%
summarize(Y_sd = sd(Y)) %>%
pull(Y_sd)
Y_g_lo <- Y_mean - 3 * Y_sd / sqrt(n_group)
Y_g_up <- Y_mean + 3 * Y_sd / sqrt(n_group)
Y_g_lo
```
```
## [1] -19.49853
```
```
Y_mean
```
```
## [1] -19.46452
```
```
Y_g_up
```
```
## [1] -19.43051
```
Use the following to check your work.
```
## NOTE: No need to change this
assertthat::assert_that(abs(Y_g_lo + 19.49853) < 1e-4)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(Y_mean + 19.46452) < 1e-4)
```
```
## [1] TRUE
```
```
assertthat::assert_that(abs(Y_g_up + 19.43051) < 1e-4)
```
```
## [1] TRUE
```
```
print("Excellent!")
```
```
## [1] "Excellent!"
```
Next, plot the control chart data and inspect.
### 63\.6\.3 **q6** Run the following chunk and answer the questions under *Observations* below.
```
## NOTE: No need to edit; run and inspect
df_q4 %>%
ggplot(aes(date, Y_mean)) +
geom_hline(yintercept = Y_g_lo, linetype = "dashed") +
geom_hline(yintercept = Y_mean) +
geom_hline(yintercept = Y_g_up, linetype = "dashed") +
geom_point() +
theme_minimal() +
labs(
x = "Batch Year",
y = "Measurement (micrograms)"
)
```
**Observations**:
\- There is considerable variation! Measurements of *any* kind are *not* exactly repeatable; variability is unavoidable.
\- Still the case in this control chart!
\- There seem to be two “upward” trends; one in the late 1970’s, and another from the mid 80’s onward.
\- This is much more prominent in the control chart
\- The “density” of measurements is not consistent; it seems that many measurements are taken around the same time, with much more “sparse” measurements in between.
\- This is a bit harder to see in the control chart; we have to keep in mind that each batch represents multiple measurements.
\- Something odd seems to have happened after 1985; the measurements seem to shift upward.
\- More visually obvious in the control chart.
* We can also make quantitative statements with the control chart: There are three points violating the control limits in the late 80’s. There is also one batch violating the control limit before 1985\.
* From 1980 to 1985, many of the batches lay below the grand mean; something seems off here.
Remember that a control chart is only a *detection* tool; to say more, you would need to go investigate the data collection process.
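If you want to verify the counts mentioned in the observations above, one approach (an aside, not part of the original exercise) is to filter the batch means against the limits:
```
## Batches whose mean falls outside the control limits
df_q4 %>%
  filter(Y_mean < Y_g_lo | Y_mean > Y_g_up)
```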
| Field Specific |
dtkaplan.github.io | https://dtkaplan.github.io/RforCalculus/representing-mathematical-functions.html |
Chapter 1 Representing mathematical functions
=============================================
As the title suggests, in this book we are going to be using the R computer language to implement the operations of calculus along with related operations such as graphing and solving.
The topic of calculus is fundamentally about mathematical functions and the operations that are performed on them. The concept of “mathematical function” is an idea. If we are going to use a computer language to work with mathematical functions, we will need to translate them into some entity in the computer language. That is, we need a language construct to ***represent*** functions and the quantities functions take as input and produce as output.
If you happen to have a background in programming, you may well be thinking that the choice of representation is obvious. For quantities, use numbers. For functions, use R\-language functions. That is indeed what we are going to do, but as you’ll see, the situation is just a little more complicated than that. A little, but that little complexity needs to be dealt with from the beginning.
1\.1 Numbers, quantities, and names
-----------------------------------
The complexity mentioned in the previous section stems from the uses and real\-world situations to which we want to be able to apply the mathematical idea of functions. The inputs taken by functions and the outputs produced by them are not necessarily numbers. Often, they are ***quantities***.
Consider these examples of quantities: money, speed, blood pressure, height, volume. Isn’t each of these quantities a number? Not quite. Consider money. We can count, add, and subtract money – just like we count, add, and subtract with numbers. But money has an additional property: the *kind* of currency. There are Dollars[1](dynamics.html#fn1), Renminbi, Euros, Rand, Krona and so on. Money as an idea is an abstraction of the kind sometimes called a ***dimension***. Other dimensions are length, time, volume, temperature, area, angle, luminosity, bank interest rate, and so on. When we are dealing with a quantity that has dimension, it’s not enough to give a number to say “how much” of that dimension we have. We need also to give ***units***. Many of these are familiar in everyday life. For instance, the dimension of length is quantified with units of meters, or inches, or miles, or parsecs, and so on. The dimension of money is measured with euros, dollars, renminbi, and so on. Area as a dimension is quantified with square\-meters, or square\-feet, or hectares, or acres, etc.
Unfortunately, most mathematics texts treat dimension and units by ignoring them. This leads to confusion and error when an attempt is made to apply mathematical ideas to quantities in the real world.
In this book we will be using functions and calculus to work with real\-world quantities. We can’t afford to ignore dimension and units. Unfortunately, mainstream computer languages like R and Python and JavaScript do not provide a systematic way to deal with dimension and units automatically. In R, for example, we can easily write
```
x <- 7
```
which stores a quantity under the name `x`. But the language balks at something like
```
y <- 12 meters
```
```
## Error: <text>:1:9: unexpected symbol
## 1: y <- 12 meters
## ^
```
Lacking a proper computer notation for representing dimensioned quantities with their units, we need some other way to keep track of things.
Here is what we will do. When we represent a quantity on the computer, we will use the **name** of the quantity to remind us what the dimension and units are. Suppose we want, for instance, to represent a family’s yearly income. A sensible choice is to use names like `income` or `income_per_year` or `family_income`. We might even give the units in the name, e.g. `family_income_euros_per_year`. But usually what we do is to document the units separately for a human reader.
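In practice, the convention looks like this (the names and values here are made up purely for illustration):
```
family_income <- 45000   # euros per year
commute_distance <- 12.5 # kilometers
```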
This style of using the name of a quantity as a reminder of the dimension and units of the quantity is outside the tradition of mathematical notation. In high\-school math, you encountered \\(x\\) and \\(y\\) and \\(t\\) and \\(\\theta\\). You might even have seen subscripts used to be more specific, for instance \\(x\_0\\). Someone trained in physics will know the traditional and typical uses of letters. \\(x\\) is likely a position, \\(t\\) is likely a time, and \\(x\_0\\) is the position at time zero.
In many areas of application, there are many different quantities to be represented. To illustrate, consider a textbook about the turbulent flow of fluids: *Turbulent Flows* by Stephen B. Pope (ISBN 978\-0521598866\). In the book’s prefatory matter, there is a 14\-page long section entitled “Nomenclature.” Figure [1\.1](representing-mathematical-functions.html#fig:pope-notation) shows a small part of that section, just the symbols that start with the capital letters R and S.
Figure 1\.1: Part of one page from the 14\-page long nomenclature section of *Turbulent Flows* by Stephen B. Pope
Each page in the nomenclature section is dense with symbols. The core of most of them is one or two letters, e.g. \\(R\\) or \\(Re\\). But there are only so many letters available, even if Greek and other alphabets are brought into things. And so more specificity is added by using subscripts, superscripts, hats (e.g. \\(\\hat{S}\_{ij}\\)), bars (e.g. \\(\\bar{S}\_{ij}\\)), stars, and so on. My favorite on this page is \\(Re\_{\\delta^\\star}\\) in which the subscript \\(\\delta^\\star\\) itself has a superscript \\(^\\star\\).
For experts working extensively with algebraic manipulations, this elaborate system of nomenclature may be unavoidable and even optimal. But when the experts are implementing their ideas as computer programs, they have to use a much more mundane approach to naming things: character strings like `Reynolds`.
There can be an advantage to the mundane approach. In this book, we’ll be working with calculus, which is used in a host of fields. We’re going to be drawing examples from many fields. And so we want our quantities to have names that are easy to recognize and remember. Things like `income` and `blood_pressure` and `farm_area`. If we tried to stick to `x` and `y` and `z`, our computer notation would become incomprehensible to the human reader and thus easily subject to error.
1\.2 R\-language functions
--------------------------
R, like most computer languages, has a programming construct to represent operations that take one or more inputs and produce an output. In R, these are called “functions.” In R, everything you do involves a function, either explicitly or implicitly.
Let’s look at the R version of a mathematical function, exponentiation. The function is named `exp` and we can look at the programming it contains
```
exp
```
```
## function (x) .Primitive("exp")
```
Not much there, except computer notation. In R, functions can be created with the key word `function`. For instance, to create a function that translates yearly income to daily income, we could write:
```
as_daily_income <- function(yearly_income) {
yearly_income / 365
}
```
The name selected for the function, `as_daily_income`, is arbitrary. We could have named the function anything at all. (It’s a good practice to give functions names that are easy to read and write and remind you what they are about.)
After the keyword `function`, there is a pair of parentheses. Inside the parentheses are the names being given to the inputs to the function. There’s just one input to `as_daily_income` which we called `yearly_income` just to help remind us what the function is intended to do. But we could have called the input anything at all.
The part `function(yearly_income)` specifies that the thing being created will be a `function` and that we are calling the input `yearly_income`. After this part comes the ***body*** of the function. The body contains R expressions that specify the calculation to be done. In this case, the expression is very simple: divide `yearly_income` by 365 – the (approximate) number of days in a year.
It’s helpful to distinguish between the *value* of an input and the *role* that the input will play in the function. A value of yearly income might be `61362` (in, say, dollars). To speak of the role that the input plays, we use the word ***argument***. For instance, we might say something like, “`as_daily_income` is a function that takes one argument.”
Following the keyword `function` and the parentheses where the arguments are defined comes a pair of curly braces `{` and `}` containing some R statements. These statements are the body of the function and contain the instructions for the calculation that will turn the inputs into the output.
Here’s a surprising feature of computer languages like R … The name given to the argument doesn’t matter at all, so long as it is used consistently in the body of the function. So the programmer might have written the R function this way:
```
as_daily_income <- function(x) {
x / 365
}
```
or even
```
as_daily_income <- function(ghskelw) {
ghskelw / 365
}
```
All of these different versions of `as_daily_income()` will do exactly the same thing and be used in exactly the same way, regardless of the name given for the argument.[2](dynamics.html#fn2) Like this:
```
as_daily_income(61362)
```
```
## [1] 168.1151
```
Often, functions have more than one argument. The names of the arguments are listed between the parentheses following the keyword `function`, like this:
```
as_daily_income <- function(yearly_income, duration) {
yearly_income / duration
}
```
In such a case, to use the function we have to provide *all* the arguments. So, with the most recent two\-argument definition of `as_daily_income()`, the following use generates an error message:
```
as_daily_income(61362)
```
```
## Error in as_daily_income(61362): argument "duration" is missing, with no default
```
Instead, specify both arguments:
```
as_daily_income(61362, 365)
```
```
## [1] 168.1151
```
One more aspect of function arguments in R … Any argument can be given a ***default value***. It’s easy to see how this works with an example:
```
as_daily_income <- function(yearly_income, duration = 365) {
yearly_income / duration
}
```
With the default value for `duration` the function can be used with either one argument or two:
```
as_daily_income(61362)
```
```
## [1] 168.1151
```
```
as_daily_income(61362, duration = 366)
```
```
## [1] 167.6557
```
The second line is the appropriate calculation for a leap year.
To close, let’s return to the `exp` function, which is built\-in to R. The single argument to `exp` was named `x` and the body of the function is somewhat cryptic: `.Primitive("exp")`.
It will often be the case that the functions we create will have bodies that don’t involve traditional mathematical expressions like \\(x / d\\). As you’ll see in later chapters, in the modern world many mathematical functions are too complicated to be represented by algebraic notation.
Don’t be put off by such non\-algebra function bodies. Ultimately, what you need to know about a function in order to use it are just three things:
1. What are the arguments to the function and what do they stand for.
2. What kind of thing is being produced by the function.
3. That the function works as advertised, e.g. calculating what we would write algebraically as \\(e^x\\).
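As an aside (not from the original text), R can report the first of these for you: the `args()` function shows a function’s arguments.
```
args(exp)
```
```
## function (x)
## NULL
```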
1\.3 Literate use of arguments
------------------------------
Recall that the names selected by the programmer of a function are arbitrary. You would use the function in exactly the same way even if the names were different. Similarly, when using the function you can pick yourself what expression will be the value of the argument.
For example, suppose you want to calculate \\(100 e^{\-2\.5}\\). Easy:
```
100 * exp(-2.5)
```
```
## [1] 8.2085
```
But it’s likely that the \\(\-2\.5\\) is meant to stand for something more general. For instance, perhaps you are calculating how much of a drug is still in the body ten days after a dose of 100 mg was administered. There will be three quantities involved in even a simple calculation of this: the dosage, the amount of time since the dose was taken, and what’s called the “time constant” for elimination of the drug via the liver or other mechanisms. (To follow this example, you don’t have to know what a time constant is. But if you’re interested, here’s an example. Suppose a drug has a time constant of 4 days. This means that 63% of the drug will be eliminated during a 4\-day period.)
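The 63% figure comes from the exponential form used in the calculation below: after one time constant, the fraction remaining is \\(e^{\-1}\\), so the fraction eliminated is
```
1 - exp(-1)
```
```
## [1] 0.6321206
```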
In writing the calculation, it’s a good idea to be clear and explicit about the meaning of each quantity used in the calculation. So, instead of `100 * exp(-2.5)`, you might want to write:
```
dose <- 100 # mg
duration <- 10 # days
time_constant <- 4 # days
dose * exp(- duration / time_constant)
```
```
## [1] 8.2085
```
Even better, you could define a *function* that does the calculation for you:
```
drug_remaining <- function(dose, duration, time_constant) {
dose * exp(- duration / time_constant)
}
```
Then, doing the calculation for the particular situation described above is a matter of using the function:
```
drug_remaining(dose = 100, duration = 10, time_constant = 4)
```
```
## [1] 8.2085
```
By using good, descriptive names and explicitly labelling which argument is which, you produce a clear and literate documentation of what you are intending to do and how someone else, including “future you,” should change things for representing a new situation.
1\.4 With respect to …
----------------------
We’ve been using R functions to represent the calculation of quantities from inputs like the dose and time constant of a drug.
But R functions play a much bigger role than that. Functions are used for just about everything, from reading a file of data to drawing a graph to finding out what kind of computer is being used. Of particular interest to us here is the use of functions to represent and implement the operations of calculus. These operations have names that you might or might not be familiar with yet: differentiation, integration, etc.
When a calculus or similar mathematical operation is being undertaken, you usually have to specify which ***variable*** or variables the operation is being done “with respect to.” To illustrate, consider the conceptually simple operation of drawing a graph of a function. More specifically, let’s draw a graph of how much drug remains in the body as a function of time since the dose was given. The basic pharmacokinetics of the process is encapsulated in the `drug_remaining()` function. So what we want to do is draw a graph of `drug_remaining()`.
Recall that `drug_remaining()` has three arguments: `dose`, `duration`, and `time_constant`. The particular graph we are going to draw shows the drug remaining as a function of duration. That is, the operation of graphing will be *with respect to `duration`*. We’ll consider, say, a dose of 100 mg of a drug with a time constant of 4 days, looking perhaps at the duration interval from 0 days to 20 days.
In this book, we’ll be using operations provided by the `mosaic` and `mosaicCalc` packages for R. The operations from these packages have a very specific notation to express *with respect to*. That notation uses the tilde character, `~`. Here’s how to draw the graph we want, using the package’s `slice_plot()` operation:
```
slice_plot(
drug_remaining(dose = 100, time_constant = 4, duration = t) ~ t,
domain(t = 0:20))
```
A proper graph would be properly labelled, for instance the horizontal axis with “Time (days)” and the vertical axis with “Remaining drug (mg)”. You’ll see how to do that in the next chapter, which explores the function\-graphing operation in more detail.
| Field Specific |
dtkaplan.github.io | https://dtkaplan.github.io/RforCalculus/graphing-functions.html |
Chapter 2 Graphing functions
============================
In this lesson, you will learn how to use R to **graph mathematical functions**.
It’s important to point out at the beginning that much of what you will be learning – much of what will be new to you here – actually has to do with the mathematical structure of functions and not R.
2\.1 Graphing mathematical functions
------------------------------------
Recall that a function is a transformation from an input to an output. Functions are used to represent the relationship between quantities. In **evaluating a function**, you specify what the input will be and the function translates it into the output.
In much of the traditional mathematics notation you have used, functions have names like \\(f\\) or \\(g\\) or \\(y\\), and the input is notated as \\(x\\). Other letters are used to represent ***parameters***. For instance, it’s common to write the equation of a line this way \\\[ y \= m x \+ b .\\] In order to apply mathematical concepts to realistic settings in the world, it’s important to recognize three things that a notation like \\(y \= mx \+ b\\) does not support well:
1. Real\-world relationships generally involve more than two quantities. (For example, the Ideal Gas Law in chemistry, \\(PV \= n R T\\), involves three variables: pressure, volume, and temperature.) For this reason, you will need a notation that lets you describe the *multiple inputs* to a function and which lets you keep track of which input is which.
2. Real\-world quantities are not typically named \\(x\\) and \\(y\\), but are quantities like “cyclic AMP concentration” or “membrane voltage” or “government expenditures”. Of course, you could call all such things \\(x\\) or \\(y\\), but it’s much easier to make sense of things when the names remind you of the quantity being represented.
3. Real\-world situations involve many different relationships, and mathematical models of them can involve different approximations and representations of those relationships. Therefore, it’s important to be able to give names to relationships, so that you can keep track of the various things you are working with.
For these reasons, the notation that you will use needs to be more general than the notation commonly used in high\-school algebra. At first, this will seem odd, but the oddness doesn’t have to do so much with the fact that the notation is used by the computer so much as for the mathematical reasons given above.
But there is one aspect of the notation that stems directly from the use of the keyboard to communicate with the computer. In writing mathematical operations, you’ll use expressions like `a * b` and `2 ^ n` and `a / b` rather than the traditional \\(a b\\) or \\(2^n\\) or \\(\\frac{a}{b}\\), and you will use parentheses both for grouping expressions and for applying functions to their inputs.
In plotting a function, you need to specify several things:
* **What is the function**. This is usually given by an expression, for instance `m * x + b` or `A * x ^ 2` or `sin(2 * t)`. Later on, you will also give names to functions and use those names in the expressions, much like `sin` is the name of a trigonometric function.
* **What are the inputs**. Remember, there’s no reason to assume that \\(x\\) is always the input, and you’ll be using variables with names like `G` and `cAMP`. So you have to be explicit in saying what’s an input and what’s not. The R notation for this involves the `~` (“tilde”) symbol. For instance, to specify a linear function with \\(x\\) as the input, you can write `m * x + b ~ x`
* **What range of inputs to make the plot over**. Think of this as the bounds of the horizontal axis over which you want to make the plot.
* **The values of any parameters**. Remember, the notation `m * x + b ~ x` involves not just the variable input `x` but also two other quantities, `m` and `b`. To make a plot of the function, you need to pick specific values for `m` and `b` and tell the computer what these are.
There are three graphing functions in `{mosaicCalc}` that enable you to graph functions, and to layer those plots with graphs of other functions or data. These are:
* `slice_plot()` for functions of one variable.
* `contour_plot()` for functions of two variables.
* `interactive_plot()` which produces an HTML widget for interacting with functions of two variables.
All three are used in very much the same way. Here’s an example of plotting out a straight\-line function:
```
slice_plot(3 * x - 2 ~ x, domain(x = range(0, 10)))
```
Often, it’s natural to write such relationships with the parameters represented by symbols. (This can help you remember which parameter is which, e.g., which is the slope and which is the intercept.) When you do this, remember to give a specific numerical value for the parameters, like this:
```
m = -3
b = -2
slice_plot(m * x + b ~ x, domain(x = range(0, 10)))
```
Try these examples:
```
A = 100
slice_plot( A * x ^ 2 ~ x, domain(x = range(-2, 3)))
A = 5
slice_plot( A * x ^ 2 ~ x, domain(x = range(0, 3)), color="red" )
slice_plot( cos(t) ~ t, domain(t = range(0,4*pi) ))
```
You can use `makeFun( )` to give a name to the function. For instance:
```
g <- makeFun(2*x^2 - 5*x + 2 ~ x)
slice_plot(g(x) ~ x , domain(x = range(-2, 2)))
```
Once the function is named, you can evaluate it by giving an input. For instance:
```
g(x = 2)
```
```
## [1] 0
```
```
g(x = 5)
```
```
## [1] 27
```
Of course, you can also construct new expressions from the function you have created. Try this somewhat complicated expression:
```
slice_plot(sqrt(abs(g(x))) ~ x, domain(x = range(-5,5)))
```
### 2\.1\.1 Exercises
#### 2\.1\.1\.1 Exercise 1
Try out this command:
```
x <- 10
slice_plot(A * x ^ 2 ~ A, domain(A = range(-2, 3)))
```
Explain why the graph doesn’t look like a parabola, even though it’s a graph of \\(A x^2\\).
ANSWER:
Notice that the input to the function is `A`, not `x`. The value of `x` has been set to 10, so the expression \\(A x^2\\) reduces to \\(100 A\\), a straight line in `A`; the graph is being made over the range of `A` from \\(\-2\\) to 3\.
#### 2\.1\.1\.2 Exercise 2
Translate each of these expressions in traditional math notation into a plot. Hand in the command that you gave to make the plot (not the plot itself).
1. \\(4 x \- 7\\) in the window \\(x\\) from 0 to 10\.
ANSWER:
```
slice_plot( 4 * x - 7 ~ x, domain(x = range(0, 10) ))
```
2. \\(\\cos 5x\\) in the window \\(x\\) from \\(\-1\\) to \\(1\\).
ANSWER:
```
slice_plot( cos(5 * x) ~ x, domain(x = range(-1, 1)))
```
3. \\(\\cos 2t\\) in the window \\(t\\) from 0 to 5\.
ANSWER:
```
slice_plot( cos(2 * t) ~ t, domain(t = range(0,5) ))
```
4. \\(\\sqrt{t} \\cos 5t\\) in the window \\(t\\) from 0 to 5\. (Hint: \\(\\sqrt(t)\\) is `sqrt(t)`.)
ANSWER:
```
slice_plot( sqrt(t) * cos(5 * t) ~ t, domain(t = range(0, 5) ))
```
#### 2\.1\.1\.3 Exercise 3
Find the value of each of the functions above at \\(x \= 10\.543\\) or at \\(t \= 10\.543\\). (Hint: Give the function a name and compute the value using an expression like `g(x = 10.543)` or `f(t = 10.543)`.)
Pick the closest numerical value
1. 32\.721, 34\.721, *35\.172*, 37\.421, 37\.721
2. \-0\.83, *\-0\.77*, \-0\.72, \-0\.68, 0\.32, 0\.42, 0\.62
3. \-0\.83, \-0\.77, \-0\.72, \-0\.68, *\-0\.62*, 0\.42, 0\.62
4. *\-2\.5*, \-1\.5, \-0\.5, 0\.5, 1\.5, 2\.5
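For instance, the first of these can be checked by naming the function and evaluating it, as the hint suggests:
```
f1 <- makeFun(4 * x - 7 ~ x)
f1(x = 10.543)
```
```
## [1] 35.172
```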
#### 2\.1\.1\.4 Exercise 4
Reproduce each of these plots. Hand in the command you used to make the identical plot:
1.
ANSWER:
```
slice_plot(2*x - 3 ~ x, domain(x = range(0, 5)))
```
2.
ANSWER:
```
slice_plot(t^2 ~ t, domain(t = range(-2, 2)))
```
#### 2\.1\.1\.5 Exercise 5
What happens when you use a symbolic parameter (e.g., `m` in `m*x + b ~ x`, but try to make a plot without selecting a specific numerical value for the parameter?
ANSWER:
You get an error message saying that the “object is not found”.
#### 2\.1\.1\.6 Exercise 6
What happens when you don’t specify a range for an input, but instead give just a single number or nothing at all, as in the second and third of these commands:
```
slice_plot(3 * x ~ x, domain(x = range(1, 4)))
slice_plot(3 * x ~ x, domain(x = 14))
slice_plot(3 * x ~ x)
```
Give a description of what happened and speculate on why.
ANSWER:
If no domain is specified, or if the domain has only one number rather than a range, `slice_plot()` generates an error message.
2\.2 Making scatterplots
------------------------
Often, the mathematical models that you will create will be motivated by data. For a deep appreciation of the relationship between data and models, you will want to study statistical modeling. Here, though, we will take a first cut at the subject in the form of **curve fitting**, the process of setting parameters of a mathematical function to make the function a close representation of some data.
This means that you will have to learn something about how to access data in computer files, how data are stored, and how to visualize the data. Fortunately, R and the `mosaic` package make this straightforward.
The data files you will be using are stored as **spreadsheets** on the Internet. Typically, the spreadsheet will have multiple variables; each variable is stored as one column. (The rows are “cases,” sometimes called “data points.”) To read the data in to R, you need to know the name of the file and its location. Often, the location will be an address on the Internet.
Here, we’ll work with `"Income-Housing.csv"`, which is located at `"http://www.mosaic-web.org/go/datasets/Income-Housing.csv"`. This file gives information from a survey on housing conditions for people in different income brackets in the US. (Source: Susan E. Mayer (1997\) *What money can’t buy: Family income and children’s life chances* Harvard Univ. Press p. 102\.)
Here’s how to read it into R:
```
Housing = read.csv("http://www.mosaic-web.org/go/datasets/Income-Housing.csv")
```
There are two important things to notice about the above statement. First, the `read.csv()` function is returning a value that is being stored in an object called `Housing`. The choice of `Housing` as a name is arbitrary; you could have stored it as `x` or `Ecuador` or whatever. It’s convenient to pick names that help you remember what’s being stored where.
Second, the name `"http://www.mosaic-web.org/go/datasets/Income-Housing.csv"` is surrounded by quotation marks. These are single\-character double quotes, that is, `"`, and not repeated single quotes `''` or backquotes. Whenever you are reading data from a file, the name of the file should be in such single\-character double quotes. That way, R knows to treat the characters literally and not as the name of an object such as `Housing`.
Once the data are read in, you can look at the data just by typing the name of the object (without quotes!) that is holding the data. For instance,
```
Housing
```
```
## Income IncomePercentile CrimeProblem AbandonedBuildings
## 1 3914 5 39.6 12.6
## 2 10817 15 32.4 10.0
## 3 21097 30 26.7 7.1
## 4 34548 50 23.9 4.1
## 5 51941 70 21.4 2.3
## 6 72079 90 19.9 1.2
```
All of the variables in the data set will be shown (although just four of them are printed here).
You can see the names of *all of the variables* in a compact format with the `names( )` command:
```
names(Housing)
```
```
## [1] "Income" "IncomePercentile" "CrimeProblem"
## [4] "AbandonedBuildings" "IncompleteBathroom" "NoCentralHeat"
## [7] "ExposedWires" "AirConditioning" "TwoBathrooms"
## [10] "MotorVehicle" "TwoVehicles" "ClothesWasher"
## [13] "ClothesDryer" "Dishwasher" "Telephone"
## [16] "DoctorVisitsUnder7" "DoctorVisits7To18" "NoDoctorVisitUnder7"
## [19] "NoDoctorVisit7To18"
```
When you want to access one of the variables, you give the name of the whole data set followed by the name of the variable, with the two names separated by a `$` sign, like this:
```
Housing$Income
```
```
## [1] 3914 10817 21097 34548 51941 72079
```
```
Housing$CrimeProblem
```
```
## [1] 39.6 32.4 26.7 23.9 21.4 19.9
```
Even though the output from `names( )` shows the variable names in quotation marks, you won’t use quotations around the variable names.
Spelling and capitalization are important. If you make a mistake, no matter how trifling to a human reader, R will not figure out what you want. For instance, here’s a misspelling of a variable name, which results in nothing (`NULL`) being returned.
```
Housing$crim
```
```
## NULL
```
Sometimes people like to look at datasets in a spreadsheet format, each entry in a little cell. In RStudio, you can do this by going to the “Workspace” tab and clicking the name of the variable you want to look at. This will produce a display like the following:
You probably won’t have much use for this, but occasionally it is helpful.
Usually the most informative presentation of data is graphical. One of the most familiar graphical forms is the **scatter\-plot**, a format in which each “case” or “data point” is plotted as a dot at the coordinate location given by two variables. For instance, here’s a scatter plot of the fraction of household that regard their neighborhood as having a crime problem, versus the median income in their bracket.
```
gf_point(CrimeProblem ~ Income, data = Housing )
```
The R statement closely follows the English equivalent: “plot as points `CrimeProblem` versus (or, as a function of) `Income`, using the data from the `Housing` object.”
Graphics are constructed in layers. If you want to plot a mathematical function **over** the data, you’ll need to use a plotting function to make another layer. Then, to display the two layers in the same plot, connect them with the `%>%` symbol (called a “pipe”). Note that `%>%` can *never* go at the start of a new line.
```
gf_point(
CrimeProblem ~ Income, data=Housing ) %>%
slice_plot(
40 - Income/2000 ~ Income, color = "red")
```
The mathematical function drawn is not a very good match to the data, but this reading is about how to draw graphs, not how to choose a family of functions or find parameters!
If, when plotting your data, you prefer to set the limits of the axes to something of your own choice, you can do this. For instance:
```
gf_point(
CrimeProblem ~ Income, data = Housing) %>%
slice_plot(
40 - Income / 2000 ~ Income, color = "blue") %>%
gf_lims(
x = range(0,100000),
y=range(0,50))
```
Properly made scientific graphics should have informative axis names. You can set the axis names directly using `gf_labs`:
```
gf_point(
CrimeProblem ~ Income, data=Housing) %>%
gf_labs(x= "Income Bracket ($US per household)/year",
y = "Fraction of Households",
main = "Crime Problem") %>%
gf_lims(x = range(0,100000), y = range(0,50))
```
Notice the use of double\-quotes to delimit the character strings, and how \\(x\\) and \\(y\\) are being used to refer to the horizontal and vertical
axes respectively.
### 2\.2\.1 Exercises
#### 2\.2\.1\.1 Exercise 1
Make each of these plots:
1. Prof. Stan Wagon (see <http://stanwagon.com>) illustrates curve fitting using measurements of the temperature (in degrees C) of a cup of coffee versus time (in minutes):
```
s = read.csv(
"http://www.mosaic-web.org/go/datasets/stan-data.csv")
gf_point(temp ~ time, data=s)
```
* Describe in everyday English the pattern you see in coffee cooling:
2. Here’s a record of the tide level in Hawaii over about 100 hours:
```
h = read.csv(
"http://www.mosaic-web.org/go/datasets/hawaii.csv")
gf_point(water ~ time, data=h)
```
* Describe in everyday English the pattern you see in the tide data:
#### 2\.2\.1\.2 Exercise 2
Construct the R commands to duplicate each of these plots. Hand in your commands (not the plot):
1. The data file `"utilities.csv"` has utility records for a house in St. Paul, Minnesota, USA. Make this plot, including the labels:
ANSWER:
```
Utilities <- read.csv(
"http://www.mosaic-web.org/go/datasets/utilities.csv")
gf_point(
temp ~ month, data=Utilities) %>%
gf_labs(x = "Month (Jan=1, Dec=12)",
y = "Temperature (F)",
main = "Ave. Monthly Temp.")
```
2. From the `"utilities.csv"` data file, make this plot of household monthly bill for natural gas versus average temperature. The line has slope \\(\-5\\) USD/degree and intercept 300 USD.
ANSWER:
```
gf_point(
gasbill ~ temp, data=Utilities) %>%
  gf_labs(x = "Temperature (F)",
          y = "Expenditures ($US)",
main = "Natural Gas Use") %>%
slice_plot( 300 - 5*temp ~ temp, color="blue")
```
2\.3 Graphing functions of two variables
----------------------------------------
You’ve already seen how to plot a graph of a function of one variable,
for instance:
```
slice_plot(
95 - 73*exp(-.2*t) ~ t,
domain(t = 0:20) )
```
This lesson is about plotting functions of two variables. For the
most part, the format used will be a ***contour plot***.
You use `contour_plot()` to plot a function of two input variables. You list the two input variables on the right side of the `~` (tilde), joined by `&` (or `+`), and you give a range for each of them. For example:
```
contour_plot(
sin(2*pi*t/10)*exp(-.2*x) ~ t & x,
domain(t = range(0,20), x = range(0,10)))
```
Each of the contours is labeled, and by default the plot is filled with color to help guide the eye. If you prefer just to see the contours, without the color fill, use the `tile=FALSE` argument.
```
contour_plot(
sin(2*pi*t/10)*exp(-.2*x) ~ t & x,
  domain(t=0:20, x=0:10),
  tile = FALSE)  # suppress the color fill, as described above
```
Occasionally, people want to see the function as a **surface**, plotted in 3 dimensions. You can get the computer to display a perspective 3\-dimensional plot by using the `interactive_plot()` function. As you’ll see by mousing around the plot, it is interactive.
```
interactive_plot(
sin(2*pi*t/10)*exp(-.5*x) ~ t & x,
domain(t = 0:20, x = 0:10))
```
It’s very hard to read quantitative values from a surface plot — the contour plots are much more useful for that. On the other hand, people seem to have a strong intuition about shapes of surfaces. Being able to translate in your mind from contours to surfaces (and *vice versa*) is a valuable skill.
To create a function that you can evaluate numerically, construct the function with `makeFun()`. For example:
```
g <- makeFun(
sin(2*pi*t/10)*exp(-.2*x) ~ t & x)
contour_plot(
g(t, x) ~ t + x,
domain(t=0:20, x=0:10))
```
```
g(x = 4, t = 7)
```
```
## [1] -0.4273372
```
Make sure to name the arguments explicitly when inputting values. That way you will be sure that you haven’t reversed them by accident. For instance, note that this statement gives a different value than the above:
```
g(4, 7)
```
```
## [1] 0.1449461
```
The reason for the discrepancy is that when the arguments are given without names, it’s the *position* in the argument sequence that matters. So, in the above, 4 is being used for the value of `t` and 7 for the value of `x`. It’s very easy to be confused by this situation, so a good practice is to identify the arguments explicitly by name:
```
g(t = 7, x = 4)
```
```
## [1] -0.4273372
```
### 2\.3\.1 Exercises
#### 2\.3\.1\.1 Exercise 1
Refer to this contour plot:
Approximately what is the value of the function at each of these
\\((x,t)\\) pairs? Pick the closest value
1. \\(x\=4, t\=10\\): {\-6,\-5,\-4,*\-2*,0,2,4,5,6}
2. \\(x\=8, t\=10\\): {\-6,*\-5*,\-4,\-2,0,2,4,5,6}
3. \\(x\=7, t\=0\\): {\-6,\-5,*\-4*,\-2,0,2,4,5,6}
4. \\(x\=9, t\=0\\): {\-6,\-5,\-4,\-2,0,2,4,5,*6*}
ANSWER:
```
contour_plot(
fun1(x, t) ~ x & t,
domain(x = 0:10, t = 1:10))
```
```
fun1(x=4,t=10)
```
```
## [1] -2.195187
```
```
fun1(x=8,t=10)
```
```
## [1] -4.88548
```
```
fun1(x=7,t=0)
```
```
## [1] 4.0552
```
```
fun1(x=9,t=0)
```
```
## [1] 6.049647
```
#### 2\.3\.1\.2 Exercise 2
Describe the shape of the contours produced by each of these functions. (Hint: Make the plot! Caution: Use the mouse to make the plotting frame more\-or\-less square in shape.)
1. The function
```
contour_plot(
sqrt( (v-3)^2 + 2*(w-4)^2 ) ~ v & w,
domain(v=0:6, w=0:6))
```
has contours that are {Parallel Lines,Concentric Circles,*Concentric Ellipses*,X Shaped}
2. The function
```
contour_plot(
sqrt( (v-3)^2 + (w-4)^2 ) ~ v & w,
domain(v=0:6, w=0:6))
```
has contours that are {Parallel Lines,*Concentric Circles*, Concentric Ellipses, X Shaped}
3. The function
```
contour_plot(
6*v - 3*w + 4 ~ v & w,
domain(v=0:6, w=0:6))
```
has contours that are {*Parallel Lines*, Concentric Circles, Concentric Ellipses, X Shaped}
Chapter 3 Parameters and functions
==================================
3\.1 Parameters versus variables
--------------------------------
Why there’s not really a difference.
Newton’s distinction between a, b, c, and x, y, z.
3\.2 Parameters of modeling functions
-------------------------------------
Give the parameterizations of exponentials, sines, power laws …
The idea is to make the arguments to the mathematical functions dimensionless.
Parameters and logarithms – You can take the log of anything you like. The units show up as a constant
3\.3 Polynomials and parameters
-------------------------------
Each parameter has its own dimension
3\.4 Parameters and `makeFun()`
-------------------------------
Describe how `makeFun()` works here.[1](dynamics.html#fn1)
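As a rough sketch of what such a description might cover (the example is ours, not from the text): `makeFun()` lets you attach default values to parameters, which can be overridden when the function is called.
```
## A parameterized exponential with default parameter values (illustrative only)
f <- makeFun(A * exp(-t / tau) ~ t, A = 100, tau = 5)
f(t = 10)            # uses the defaults A = 100, tau = 5
f(t = 10, tau = 20)  # override a parameter at call time
```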
3\.5 Functions without parameters: splines and smoothers
--------------------------------------------------------
EXPLAIN hyper\-parameter. It’s a number that governs the shape of the function, but it can be set arbitrarily and still match the data. Hyper\-parameters are not set directly by the data.
Mathematical models attempt to capture patterns in the real world. This is useful because the models can be more easily studied and manipulated than the world itself. One of the most important uses of functions is to reproduce or capture or model the patterns that appear in data.
Sometimes, the choice of a particular form of function — exponential or power\-law, say — is motivated by an understanding of the processes involved in the pattern that the function is being used to model. But other times, all that’s called for is a function that follows the data and that has other desirable properties, for example is smooth or increases steadily.
“Smoothers” and “splines” are two kinds of general\-purpose functions that can capture patterns in data, but for which there is no simple algebraic form. Creating such functions is remarkably easy, so long as you can free yourself from the idea that functions must always have formulas.
Smoothers and splines are defined not by algebraic forms and parameters, but by data and algorithms. To illustrate, consider some simple data. The data set `Loblolly` contains 84 measurements of the age and height of loblolly pines.
```
gf_point(height ~ age, data=datasets::Loblolly)
```
Several three\-year\-old pines of very similar height were measured and tracked over time: age five, age ten, and so on. The trees differ from one another, but they are all pretty similar and show a simple pattern: linear growth at first, which seems to slow down over time.
It might be interesting to speculate about what sort of algebraic function the loblolly pines’ growth follows, but any such function is just a model. For many purposes, such as measuring how the growth rate changes as the trees age, all that’s needed is a smooth function that looks like the data. Let’s consider two:
1. A “cubic spline”, which follows the groups of data points and curves smoothly and gracefully.
```
f1 <- spliner(height ~ age, data = datasets::Loblolly)
```
2. A “linear interpolant”, which connects the groups of data points with straight lines.
```
f2 <- connector(height ~ age, data = datasets::Loblolly)
```
The definitions of these functions may seem strange at first — they are entirely defined by the data: no parameters! Nonetheless, they are genuine functions and can be worked with like other functions. For example, you can put in an input and get an output:
```
f1(age = 8)
```
```
## [1] 20.68193
```
```
f2(age = 8)
```
```
## [1] 20.54729
```
You can graph them:
```
gf_point(height ~ age, data = datasets::Loblolly) %>%
slice_plot(f1(age) ~ age) %>%
  slice_plot(f2(age) ~ age, color = "red")
```
You can even “solve” them, for instance finding the age at which the height will be 35 feet:
```
findZeros(f1(age) - 35 ~ age, xlim=range(0,30))
```
```
## age
## 1 12.6905
```
```
findZeros(f2(age) - 35 ~ age, xlim=range(0,30))
```
```
## age
## 1 12.9
```
In all respects, these are perfectly ordinary functions. All respects but one: there is no simple formula for them. You’ll notice this if you ever try to look at the computer\-language definition of the functions:
```
f2
```
```
## function (age)
## {
## x <- get(fnames[2])
## if (connect)
## SF(x)
## else SF(x, deriv = deriv)
## }
## <environment: 0x7ff1bf85ec18>
```
There’s almost nothing here to tell the reader what the body of the function is doing. The definition refers to the data itself which has been stored in an “environment.” These are computer\-age functions, not functions from the age of algebra.
As you can see, the spline and linear connector functions are quite similar, except for the range of inputs outside of the range of the data. Within the range of the data, however, both types of functions go exactly through the center of each age\-group.
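As a quick check (our own sketch, not in the original), you can evaluate both functions outside the range of the data, where their behavior differs most:
```
## The Loblolly ages run from 3 to 25 years; try an age beyond the data
f1(age = 30)   # the spline keeps curving past the last data point
f2(age = 30)   # the connector extends its last straight segment
```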
Splines and connectors are not always what you will want, especially when the data are not divided into discrete groups, as with the loblolly pine data. For instance, the `trees.csv` data set contains measurements of the volume, girth, and height of black cherry trees. The trees were felled for their wood, and the interest in making the measurements was to help estimate how much usable volume of wood can be gotten from a tree, based on the girth (that is, circumference) and height. This would be useful, for instance, in estimating how much money a tree is worth. However, unlike the loblolly pine data, the black cherry data does not involve trees falling nicely into defined groups.
```
Cherry <- datasets::trees
gf_point(Volume ~ Girth, data = Cherry)
```
It’s easy enough to make a spline or a linear connector:
```
g1 = spliner(Volume ~ Girth, data = Cherry)
g2 = connector(Volume ~ Girth, data = Cherry)
slice_plot(g1(x) ~ x, domain(x = 8:18)) %>%
slice_plot(g2(x) ~ x, color ="red") %>%
gf_point(Volume ~ Girth, data = Cherry) %>%
gf_labs(x = "Girth (inches)")
```
The two functions both follow the data … but a bit too faithfully! Each of the functions insists on going through every data point. (The one exception is the two points with girth of 13 inches. There’s no function that can go through both of the points with girth 13, so the functions split the difference and go through the average of the two points.)
The up\-and\-down wiggling of the functions is hard to believe. For such situations, where you have reason to believe that a smooth function is more appropriate than one with lots of ups\-and\-downs, a different type of function is appropriate: a smoother.
```
g3 <- smoother(Volume ~ Girth, data = Cherry, span=1.5)
gf_point(Volume~Girth, data=Cherry) %>%
slice_plot(g3(Girth) ~ Girth) %>%
gf_labs(x = "Girth (inches)")
```
Smoothers are well named: they construct a smooth function that goes close to the data. You have some control over how smooth the function should be. The hyper\-parameter `span` governs this:
```
g4 <- smoother(Volume ~ Girth, data=Cherry, span=1.0)
gf_point(Volume~Girth, data = Cherry) %>%
slice_plot(g4(Girth) ~ Girth) %>%
gf_labs(x = "Girth (inches)", y = "Wood volume")
```
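For comparison, here is a sketch of our own using the default span (0\.5, as noted in the rules below), which lets the curve follow the data still more closely:
```
g6 <- smoother(Volume ~ Girth, data = Cherry)   # span left at its default
gf_point(Volume ~ Girth, data = Cherry) %>%
  slice_plot(g6(Girth) ~ Girth) %>%
  gf_labs(x = "Girth (inches)", y = "Wood volume")
```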
Of course, often you will want to capture relationships where there is more than one variable as the input. Smoothers do this very nicely; just specify which variables are to be the inputs.
```
g5 <- smoother(Volume ~ Girth+Height,
data = Cherry, span = 1.0)
gf_point(Height ~ Girth, data = Cherry) %>%
contour_plot(g5(Girth, Height) ~ Girth + Height) %>%
gf_labs(x = "Girth (inches)",
y = "Height (ft)",
title = "Volume (ft^3)")
```
When you make a smoother or a spline or a linear connector, remember these rules:
1. You need a data frame that contains the data.
2. You use the formula with the variable you want as the output of the function on the left side of the tilde, and the input variables on the right side.
3. The function that is created will have input names that match the variables you specified as inputs. (For the present, only `smoother` will accept more than one input variable.)
4. The smoothness of a `smoother` function can be set by the `span` argument. A span of 1\.0 is typically pretty smooth. The default is 0\.5\.
5. When creating a spline, you have the option of declaring `monotonic=TRUE`. This will arrange things to avoid extraneous bumps in data that shows a steady upward pattern or a steady downward pattern, as sketched just below.
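Here is the sketch mentioned in item 5 (our own example, not from the text), applying the `monotonic=TRUE` option to the loblolly pine data, whose heights rise steadily with age:
```
f3 <- spliner(height ~ age, data = datasets::Loblolly, monotonic = TRUE)
gf_point(height ~ age, data = datasets::Loblolly) %>%
  slice_plot(f3(age) ~ age, color = "blue")
```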
When you want to plot out a function, you need of course to choose a range for the input values. It’s often sensible to select a range that corresponds to the data on which the function is based. You can find this with the `range()` command, e.g.
```
range(Cherry$Height)
```
```
## [1] 63 87
```
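For instance, a sketch (ours) that plots the girth smoother from above over exactly the range of the girth data:
```
slice_plot(g3(Girth) ~ Girth,
           domain(Girth = range(Cherry$Girth)))
```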
Chapter 4 Solving
=================
4\.1 Functions vs equations
---------------------------
Much of the content of high\-school algebra involves “solving.” In the typical situation, you have an equation, say
\\\[ 3 x \+ 2 \= y\\]
and you are asked to “solve” the equation for \\(x\\). This involves rearranging the symbols of the equation in the familiar ways, e.g., moving the \\(2\\) to the right hand side and dividing by the \\(3\\). These steps, originally termed “balancing” and “reduction,” are summarized in the original meaning of the Arabic word “al\-jabr,” which appears in the title of al\-Khwarizmi’s “*Compendious Book on Calculation by Completion and Balancing*.” This is where our word “algebra” originates.
High school students are also taught a variety of *ad hoc* techniques for solving in particular situations. For example, the quadratic equation \\(a x^2 \+ b x \+ c \= 0\\) can be solved by application of the procedures of “factoring,” or “completing the square,” or use of the quadratic formula: \\\[x \= \\frac{\-b \\pm \\sqrt{b^2 \- 4ac}}{2a} .\\] Parts of this formula can be traced back to at least the year 628 in the writings of Brahmagupta, an Indian mathematician, but the complete formula seems to date from Simon Stevin in Europe in 1594, and was published by René Descartes in 1637\.
For some problems, students are taught named operations that involve the inverse of functions. For instance, to solve \\(\\sin(x) \= y\\), one simply writes down \\(x \= \\arcsin(y)\\) without any detail on how to find \\(\\arcsin\\) beyond “use a calculator” or, in the old days, “use a table from a book.”
### 4\.1\.1 From Equations to Zeros of Functions
With all of this emphasis on procedures such as factoring and moving symbols back and forth around an \\(\=\\) sign, students naturally ask, “How do I solve equations in R?”
The answer is surprisingly simple, but to understand it, you need to have a different perspective on what it means to “solve” and where the concept of “equation” comes in.
The general form of the problem that is typically used in numerical calculations on the computer is that the equation to be solved is really a function to be inverted. That is, for numerical computation, the problem should be stated like this:
> *You have a function \\(f(x)\\). You happen to know the form of the function \\(f\\) and the value of the output \\(y\\) for some unknown input value \\(x\\). Your problem is to find the input \\(x\\) given the function \\(f\\) and the output value \\(y\\).*
One way to solve such problems is to find the **inverse of \\(f\\)**. This is often written \\(f^{\\ \-1}\\) (which many students understandably but mistakenly take to mean \\(1/f(x)\\)). But finding the inverse of \\(f\\) can be very difficult and is overkill. Instead, the problem can be handled by finding the **zeros** of \\(f\\).
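In other words, to solve \\(f(x) \= y\\), define \\(h(x) \= f(x) \- y\\) and look for the \\(x\\) at which \\(h(x) \= 0\\).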
If you can plot out the function \\(f(x)\\) for a range of \\(x\\), you can easily find the zeros. Just find the \\(x\\) at which the graph crosses zero, that is, where it crosses the horizontal line \\(y \= 0\\). This works for any function, even ones that are so complicated that there aren’t algebraic procedures for finding a solution.
To illustrate, consider the function \\(g()\\)
```
g <- makeFun(sin(x^2)*cos(sqrt(x^4 + 3 )-x^2) - x + 1 ~ x)
slice_plot(g(x) ~ x, domain(x = -3:3)) %>%
gf_hline(yintercept = 0, color = "red")
```
You can see easily enough that the function crosses zero (the red line) somewhere between \\(x\=1\\) and \\(x\=2\\). You can get more detail by zooming in around the approximate solution:
```
slice_plot(g(x) ~ x, domain(x=1:2)) %>%
gf_hline(yintercept = 0, color = "red")
```
The crossing is at roughly \\(x \\approx 1\.6\\). You could, of course, zoom in further to get a better approximation. Or, you can let the software do this for you:
```
findZeros(g(x) ~ x, xlim = range(1, 2))
```
```
## x
## 1 1.5576
```
The argument `xlim` is used to state where to look for a solution. (Due to a software bug, it’s always called `xlim` even if you use a variable other than `x` in your expression.)
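To illustrate (a quick example of our own), the search interval is still named `xlim` even when the input variable is `t`:
```
findZeros(cos(t) ~ t, xlim = range(0, 10))
```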
You need only have a rough idea of where the solution is. For example:
```
findZeros(g(x) ~ x, xlim = range(-1000, 1000))
```
```
## x
## 1 1.5576
```
`findZeros()` will only look inside the interval you give it. It will do a more precise job if you can state the interval in a narrow way.
### 4\.1\.2 Multiple Solutions
The `findZeros( )` function will try to find multiple solutions if they exist. For instance, the equation \\(\\sin x \= 0\.35\\) has an infinite number of solutions. Here are some of them:
```
findZeros( sin(x) - 0.35 ~ x, xlim=range(-20,20) )
```
```
## x
## 1 -12.2088
## 2 -9.7823
## 3 -5.9256
## 4 -3.4991
## 5 0.3576
## 6 2.7840
## 7 6.6407
## 8 9.0672
## 9 12.9239
## 10 15.3504
```
Note that the *equation* \\(\\sin x \= 0\.35\\) was turned into the function `sin(x) - 0.35`.
### 4\.1\.3 Setting up a Problem
As the name suggests, `findZeros( )` finds the zeros of functions. You can set up any solution problem in this form. For example, suppose you want to solve \\(4 \+ e^{k t} \= 2^{b t}\\) for \\(b\\), letting the parameter \\(k\\) be \\(k\=0\.00035\\). You may, of course, remember how to do this problem using logarithms. But here’s the set up for `findZeros( )`:
```
g <- makeFun(4 + exp(k*t) - 2^(b*t) ~ b, k=0.00035, t=1)
findZeros( g(b) ~ b , xlim=range(-1000, 1000) )
```
```
## b
## 1 2.322
```
Note that numerical values for both \\(b\\) and \\(t\\) were given, even though the original problem said nothing about the value of \\(t\\). This points to one of the advantages of the algebraic techniques: solving the problem algebraically would show you exactly how, or whether, the answer depends on \\(t\\). The numerical `findZeros( )` function doesn’t know the rules of algebra, so it can’t tell you that. What you can do is try other values of \\(t\\) and see whether the answer changes:
```
findZeros( g(b, t=2) ~ b, xlim=range(-1000,1000) )
```
```
## b
## 1 1.1611
```
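Here the answer does change: since \\(k t\\) is tiny, \\(4 \+ e^{k t} \\approx 5\\), so \\(b t \\approx \\log\_2 5 \\approx 2\.32\\), and doubling \\(t\\) halves \\(b\\). For this equation, then, \\(t\\) does matter, which is something the algebra would have told you directly.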
### 4\.1\.4 Exercises
#### 4\.1\.4\.1 Exercise 1
Solve the equation \\(\\sin(\\cos(x^2) \- x) \- x \= 0\.5\\) for \\(x\\). {0\.0000,0\.1328,*0\.2098*,0\.3654,0\.4217}
ANSWER:
```
findZeros( sin(cos(x^2) - x) -x - 0.5 ~ x, xlim=range(-10,10))
```
```
## x
## 1 0.2098
```
#### 4\.1\.4\.2 Exercise 2
Find any zeros of the function \\(3 e^{\- t/5} \\sin(\\frac{2\\pi}{2} t)\\) that are between \\(t\=1\\) and \\(t\=10\\).
1. There aren’t any zeros in that interval.
2. There aren’t any zeros at all!
3. \\(2, 4, 6, 8\\)
4. \\(1, 3, 5, 7, 9\\)
5. *\\(1, 2, 3, 4, 5, 6, 7, 8, 9\\)*
ANSWER:
```
findZeros( 3*exp(-t/5)*sin(pi*t) ~ t, xlim=range(1,10))
```
```
## t
## 1 0
## 2 1
## 3 2
## 4 3
## 5 4
## 6 5
## 7 6
## 8 7
## 9 8
## 10 9
```
#### 4\.1\.4\.3 Exercise 3
Use `findZeros()` to find the zeros of each of these polynomials:
1. \\(3 x^2 \+7 x \- 10\\)
Where are the zeros?
a. *\\(x\=\-3\.33\\) or \\(1\\)*
b. \\(x\=3\.33\\) or \\(1\\)
c. \\(x\=\-3\.33\\) or \\(\-1\\)
d. \\(x\=3\.33\\) or \\(\-1\\)
e. No zeros
ANSWER:
```
findZeros( 3*x^2 + 7*x - 10 ~ x, xlim=range(-100,100))
```
```
## x
## 1 -3.3334
## 2 1.0000
```
2. \\(4 x^2 \-2 x \+ 20\\)
Where are the zeros?
a. \\(x\=\-3\.33\\) or \\(1\\)
b. \\(x\=3\.33\\) or \\(1\\)
c. \\(x\=\-3\.33\\) or \\(\-1\\)
d. \\(x\=3\.33\\) or \\(\-1\\)
e. *No zeros*
ANSWER:
The discriminant is \\(b^2 \- 4ac \= 4 \- 320 \= \-316\\), which is negative, so this quadratic has no real zeros.
3. \\(2 x^3 \- 4x^2 \- 3x \- 10\\)
Which one of these is a zero? {\-1\.0627,0,1\.5432,1\.8011,2\.1223,*3\.0363*,none}
ANSWER:
```
findZeros(2*x^3 - 4*x^2 - 3*x - 10 ~ x, xlim=c(-10,10))
```
```
## x
## 1 3.0363
```
4. \\(7x^4 \- 2 x^3 \- 4x^2 \- 3x \- 10\\)
Which one of these is a zero? {*\-1\.0627*,0,1\.5432,1\.8011,2\.1223,3\.0363,none}
ANSWER:
```
findZeros( 7*x^4 -2*x^3 - 4*x^2 - 3*x - 10 ~ x, xlim=c(-10,10))
```
```
## x
## 1 -1.0628
## 2 1.4123
```
5. \\(6 x^5 \-7x^4 \- 2 x^3 \- 4x^2 \- 3x \- 10\\)
Which one of these is a zero? {\-1\.0627,0,1\.5432,*1\.8012*,2\.1223,3\.0363,none}
ANSWER:
```
findZeros( 6*x^5-7*x^4 -2*x^3 - 4*x^2 - 3*x - 10 ~ x, xlim=c(-10,10))
```
```
## x
## 1 1.8012
```
| Field Specific |
dtkaplan.github.io | https://dtkaplan.github.io/RforCalculus/modeling-with-linear-combinations.html |
Chapter 5 Modeling with linear combinations
===========================================
5\.1 Linear algebra
-------------------
The computations for performing linear algebra operations are among the most important in science. They are so important that the unit used to measure computer performance for scientific computation is called a “flop”, which stands for “floating point operation” and is defined in terms of a linear algebra calculation.
For you, the issue with using the computer to perform linear algebra is mainly how to set up the problem so that the computer can solve it. The notation that we will use has been chosen specifically to relate to the kinds of problems for which you will be using linear algebra: fitting models to data. This means that the notation will be very compact.
The basic linear algebra operations of importance are:
* ***Project*** a single vector onto the space defined by a set of vectors.
* Make a ***linear combination*** of vectors.
In performing these operations, you will use two main functions, `project( )` and `mat( )`, along with the ordinary multiplication `*` and addition `+` operations. There is also a new sort of operation that provides a compact description for taking a linear combination: “matrix multiplication,” written `%*%`.
By the end of this session, you should feel comfortable with those two functions and the new form of multiplication `%*%`.
To start, consider the sort of linear algebra problem often presented in textbooks in the form of simultaneous linear equations. For example:
\\\[\\begin{array}{rcrcr}
x \& \+ \& 5 y \& \= \&1\\\\
2x \& \+ \& \-2 y \& \= \&1\\\\
4x \& \+ \& 0 y \& \= \& 1\\\\
\\end{array} .\\]
Thinking in terms of vectors, this equation can be re\-written as
\\\[
x \\left(\\begin{array}{r}1\\\\2\\\\4\\end{array}\\right) \+
y \\left(\\begin{array}{r}5\\\\\-2\\\\0\\end{array}\\right) \=
\\left(\\begin{array}{r}1\\\\1\\\\1\\end{array}\\right) .\\]
Solving this vector equation involves projecting the vector
\\(\\vec{b} \= \\left(\\begin{array}{r}1\\\\1\\\\1\\end{array}\\right)\\) onto the space defined by the two vectors
\\(\\vec{v}\_1 \= \\left(\\begin{array}{r}1\\\\2\\\\4\\end{array}\\right)\\) and
\\(\\vec{v}\_2 \= \\left(\\begin{array}{r}5\\\\\-2\\\\0\\end{array}\\right)\\). The solution, \\(x\\) and \\(y\\), gives the number of multiples of each vector needed to reach the projection of \\(\\vec{b}\\) onto that space.
When setting this up with the R notation that you will be using, you need to create each of the vectors \\(\\vec{b}, \\vec{v}\_1\\), and \\(\\vec{v}\_2\\). Here’s how:
The projection is accomplished using the `project()` function:
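A minimal sketch of the setup, with object names `b`, `v1`, and `v2` chosen to match the labels in the output below (and assuming the `mosaic`\-style formula interface for `project()` used throughout this chapter):
```
b  <- c(1, 1, 1)    # the right-hand side vector
v1 <- c(1, 2, 4)    # the coefficients on x
v2 <- c(5, -2, 0)   # the coefficients on y
project(b ~ v1 + v2)
```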
```
## v1 v2
## 0.32894737 0.09210526
```
Read this as "project \\(\\vec{b}\\) onto the subspace defined by
\\(\\vec{v}\_1\\) and \\(\\vec{v}\_1\\).
The answer is given in the form of the multiplier on \\(\\vec{v}\_1\\) and \\(\\vec{v}\_2\\), that is, the values of \\(x\\) and \\(y\\) in the original problem. This answer is the “best” in the sense that these particular values for \\(x\\) and \\(y\\) are the ones that come the closest to \\(\\vec{b}\\), that is, the linear combination that give the projection of \\(\\vec{b}\\) onto the subspace defined by \\(\\vec{v}\_1\\) and \\(\\vec{v}\_2\\).
If you want to see what that projection is, just multiply the coefficients by the vectors and add them up. In other words, take the linear combination
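A sketch of that computation, assuming (as the printout above suggests) that `project()` returns the two multipliers as a named numeric vector:
```
z <- project(b ~ v1 + v2)   # the multipliers found above
z[1] * v1 + z[2] * v2       # their linear combination: the projection of b
```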
```
## [1] 0.7894737 0.4736842 1.3157895
```
When there are lots of vectors involved in the linear combination,
it’s much easier to be able to refer to all of them by a single object
name. The `mat( )` function takes the vectors and packages them
together into a matrix. It works just like `project( )`, but
doesn’t involve the vector that’s being projected onto the subspace.
Like this:
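A sketch of the call, assuming (as the text below implies) that the intercept is left out unless you ask for it explicitly:
```
A <- mat(~ v1 + v2)   # package v1 and v2 side by side as the columns of a matrix
A
```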
```
## v1 v2
## [1,] 1 5
## [2,] 2 -2
## [3,] 4 0
```
Notice that \\(A\\) doesn’t have any new information; it’s just the two
vectors \\(\\vec{v}\_1\\) and \\(\\vec{v}\_2\\) placed side by side.
Let’s do the projection again:
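This time, store the multipliers in an object; the text refers to it as the solution \\(z\\):
```
z <- project(b ~ v1 + v2)
z
```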
```
## v1 v2
## 0.32894737 0.09210526
```
To get the linear combination of the vectors in \\(A\\), you matrix\-multiply the matrix \\(A\\) times the solution \\(z\\):
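In code, that is a single matrix multiplication:
```
A %*% z
```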
```
## [,1]
## [1,] 0.7894737
## [2,] 0.4736842
## [3,] 1.3157895
```
Notice, it’s the same answer you got when you did the multiplication “by hand.”
When working with data, statisticians almost always include another vector called the ***intercept*** which is simply a vector of all 1s. You can denote the intercept vector with a plain `1` in the `mat()` or `project()` function, like this:
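A sketch of the matrix\-building step; the `+ 1` term is an assumption here, chosen to produce the column of ones labelled `(Intercept)` in the first printout below:
```
A <- mat(~ v1 + v2 + 1)   # the plain 1 is assumed to add the intercept column
A
```
The second and third printouts below show the projection of \\(\\vec{b}\\) onto these three columns and the resulting linear combination, which now reproduces \\(\\vec{b}\\) exactly.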
```
## (Intercept) v1 v2
## [1,] 1 1 5
## [2,] 1 2 -2
## [3,] 1 4 0
```
```
## A(Intercept) Av1 Av2
## 1.000000e+00 0.000000e+00 2.775558e-17
```
```
## [,1]
## [1,] 1
## [2,] 1
## [3,] 1
```
Notice that the matrix `A` has a *third* vector: the intercept vector. The solution consequently has three coefficients. Notice as well that the linear combination of the three vectors exactly reaches the vector \\(\\vec{b}\\). That’s because now there are three vectors that define the subspace: \\(\\vec{v}\_1\\), \\(\\vec{v}\_2\\), and the intercept vector of all ones. Three linearly independent vectors in three dimensions span the whole space, so the projection reproduces \\(\\vec{b}\\) exactly.
### 5\.1\.1 Example: Atomic bomb data.
The data file `blastdata.csv` contains measurements of the radius of the fireball from an atomic bomb (in meters) versus time (in seconds). In the analysis of these data, it’s appropriate to look for a power\-law relationship between radius and time. This will show up as a linear relationship between log\-radius and log\-time. In other words, we want to find \\(m\\) and \\(b\\) in the relationship log\-radius \\(\= m\\) log\-time \\(\+ b\\). This amounts to the projection
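A plausible sketch of that projection, assuming the data file has columns named `radius` and `time` and follows the same URL pattern as the other datasets used in these notes:
```
Blast <- read.csv("http://www.mosaic-web.org/go/datasets/blastdata.csv")  # assumed URL
project(log(radius) ~ log(time) + 1, data = Blast)                        # assumed column names
```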
```
## (Intercept) log(time)
## 6.2946893 0.3866425
```
The parameter \\(m\\) is the coefficient on log\-time, found to be 0\.3866\.
### 5\.1\.2 Exercises
#### 5\.1\.2\.1 Exercise 1
Remember all those “find the line that goes through the points” problems from algebra class? They can be a bit simpler with the proper linear\-algebra tools.
Example: “Find the line that goes through the points \\((2,3\)\\) and \\((7,\-8\)\\).”
One way to interpret this is that we are looking for a relationship between \\(x\\) and \\(y\\) such that \\(y \= mx \+ b\\). In vector terms, this means that the \\(x\\)\-coordinates of the two points, \\(2\\) and \\(7\\), made into a vector \\(\\left(\\begin{array}{c}2\\\\7\\end{array}\\right)\\) will be scaled by \\(m\\), and an intercept vector \\(\\left(\\begin{array}{c}1\\\\1\\end{array}\\right)\\) will be scaled by \\(b\\).
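A sketch of that computation for the example points \\((2,3\)\\) and \\((7,\-8\)\\), with names matching the output labels below:
```
x <- c(2, 7)        # the two x-coordinates
y <- c(3, -8)       # the two y-coordinates
project(y ~ x + 1)  # the plain 1 supplies the intercept vector
```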
```
## (Intercept) x
## 7.4 -2.2
```
Now you know \\(m\\) and \\(b\\).
YOUR TASKS: For each of the following, find the line that goes through the two Cartesian points using the `project( )` function. Remember, the vectors involved in the projection will have the form
\\\[\\vec{x}\=\\left(\\begin{array}{r}x\_1\\\\x\_2\\end{array}\\right) \\mbox{and} \\ \\
\\vec{y}\=\\left(\\begin{array}{r}y\_1\\\\y\_2\\end{array}\\right) .\\]
1. Find the line that goes through the two points \\((x\_1\=9, y\_1\=1\)\\) and \\((x\_2\=3, y\_2\=7\)\\).
1. \\(y \= x \+ 2\\)
2. *\\(y \= \-x \+ 10\\)*
3. \\(y\=x \+ 0\\)
4. \\(y \= \-x \+ 0\\)
5. \\(y \= x \- 2\\)
ANSWER:
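A sketch of the call behind this output (the column label shows the vectors that were used directly in the formula):
```
project(c(1, 7) ~ c(9, 3) + 1)
```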
```
## (Intercept) c(9, 3)
## 10 -1
```
2. Find the line that goes through the origin \\((x\_1\=0, y\_1\=0\)\\) and \\((x\_2\=2,y\_2\=\-2\)\\).
1. \\(y \= x \+ 2\\)
2. \\(y \= \-x \+ 10\\)
3. \\(y\=x \+ 0\\)
4. *\\(y \= \-x \+ 0\\)*
5. \\(y \= x \- 2\\)
ANSWER:
```
## (Intercept) c(0, 2)
## -5.551115e-17 -1.000000e+00
```
3. Find the line that goes through \\((x\_1\=1, y\_1\=3\)\\) and \\((x\_2\=7, y\_2\=9\)\\)
1. *\\(y \= x \+ 2\\)*
2. \\(y \= \-x \+ 10\\)
3. \\(y\=x \+ 0\\)
4. \\(y \= \-x \+ 0\\)
5. \\(y \= x \- 2\\)
ANSWER:
```
## (Intercept) c(1, 7)
## 2 1
```
#### 5\.1\.2\.2 Exercise 2
1. Find \\(x\\), \\(y\\), and \\(z\\) that solve the following:
\\\[
x \\left(\\begin{array}{r}1\\\\2\\\\4\\end{array}\\right) \+
y \\left(\\begin{array}{r}5\\\\\-2\\\\0\\end{array}\\right) \+
z \\left(\\begin{array}{r}1\\\\\-2\\\\3\\end{array}\\right)
\=
\\left(\\begin{array}{r}1\\\\1\\\\1\\end{array}\\right) .\\]
What’s the value of \\(x\\)?: {\-0\.2353,0\.1617,*0\.4265*,1\.3235,1\.5739}
ANSWER:
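A sketch of the call behind this output:
```
project(c(1, 1, 1) ~ c(1, 2, 4) + c(5, -2, 0) + c(1, -2, 3))
```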
```
## c(1, 2, 4) c(5, -2, 0) c(1, -2, 3)
## 0.4264706 0.1617647 -0.2352941
```
2. Find \\(x\\), \\(y\\), and \\(z\\) that solve the following:
\\\[
x \\left(\\begin{array}{r}1\\\\2\\\\4\\end{array}\\right) \+
y \\left(\\begin{array}{r}5\\\\\-2\\\\0\\end{array}\\right) \+
z \\left(\\begin{array}{r}1\\\\\-2\\\\3\\end{array}\\right)
\=
\\left(\\begin{array}{r}1\\\\4\\\\3\\end{array}\\right) .\\]
What’s the value of \\(x\\)? {\-0\.2353,0\.1617,0\.4264,*1\.3235*,1\.5739}
ANSWER:
```
## c(1, 2, 4) c(5, -2, 0) c(1, -2, 3)
## 1.32352941 0.08823529 -0.76470588
```
#### 5\.1\.2\.3 Exercise 3
Using `project( )`, solve these sets of simultaneous linear equations for \\(x\\), \\(y\\), and \\(z\\):
Two equations in two unknowns:
\\\[\\begin{array}{rcrcr}
x \& \+ \& 2 y \& \= \&1\\\\
3 x \& \+ \& 2 y \& \= \&7\\\\
\\end{array}\\]
1. *\\(x\=3\\) and \\(y\=\-1\\)*
2. \\(x\=1\\) and \\(y\=3\\)
3. \\(x\=3\\) and \\(y\=3\\)
ANSWER:
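A sketch of the setup, with vector names matching the output labels:
```
x <- c(1, 3)   # the coefficients on x in the two equations
y <- c(2, 2)   # the coefficients on y
b <- c(1, 7)   # the right-hand sides
project(b ~ x + y)
```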
```
## x y
## 3 -1
```
Three equations in three unknowns:
\\\[\\begin{array}{rcrcrcr}
x \& \+ \& 2 y \& \+ \& 7 z \& \= \&1\\\\
3 x \& \+ \& 2 y \& \+ \&2 z\&\= \&7\\\\
\-2 x \& \+ \& 3 y \& \+ \& z\&\= \&7\\\\
\\end{array} \\]
1. \\(x\=3\.1644\\), \\(y\=\-0\.8767\\), \\(z\=0\.8082\\)
2. \\(x\=\-0\.8767\\),\\(y\=0\.8082\\), \\(z\=3\.1644\\)
3. *\\(x\=0\.8082\\), \\(y\=3\.1644\\), \\(z\=\-0\.8767\\)*
ANSWER:
```
## x y z
## 0.8082192 3.1643836 -0.8767123
```
Four equations in four unknowns:
\\\[\\begin{array}{rcrcrcrcr}
x \& \+ \& 2 y \& \+ \& 7 z \& \+\& 8 w\& \= \&1\\\\
3 x \& \+ \& 2 y \& \+ \&2 z\& \+\& 2 w\& \= \&7\\\\
\-2 x \& \+ \& 3 y \& \+ \& z\&\+\& w\&\= \&7\\\\
x \& \+ \& 5 y \& \+ \&3 z\&\+\& w\&\= \&3\\\\
\\end{array} \\]
1. \\(x\=5\.500\\), \\(y\=\-7\.356\\), \\(z\=3\.6918\\), \\(w\=1\.1096\\)
2. *\\(x\=1\.1096\\), \\(y\=3\.6918\\), \\(z\=\-7\.356\\), \\(w\=5\.500\\)*
3. \\(x\=5\.500\\), \\(y\=\-7\.356\\), \\(z\=1\.1096\\), \\(w\=3\.6918\\)
4. \\(x\=1\.1096\\), \\(y\=\-7\.356\\), \\(z\=5\.500\\), \\(w\=3\.6918\\)
ANSWER:
```
## x y z w
## 1.109589 3.691781 -7.356164 5.500000
```
Three equations in four unknowns:
\\\[\\begin{array}{rcrcrcrcr}
x \& \+ \& 2 y \& \+ \& 7 z \& \+\& 8 w\& \= \&1\\\\
3 x \& \+ \& 2 y \& \+ \&2 z\& \+\& 2 w\& \= \&7\\\\
\-2 x \& \+ \& 3 y \& \+ \& z\&\+\& w\&\= \&7\\\\
\\end{array} \\]
1. There is no solution.
2. *There is a solution.*
ANSWER:
```
## [,1]
## [1,] 1
## [2,] 7
## [3,] 7
```
You may hear it said that there is no solution to a problem of three equations in four unknowns. But a more precise statement is that there are many solutions, an infinity of them. Mathematicians tend to use “a solution” to stand for “a unique, exact solution.” In applied work, neither “unique” nor “exact” mean very much.
| Field Specific |
dtkaplan.github.io | https://dtkaplan.github.io/RforCalculus/fitting-functions-to-data.html |
Chapter 6 Fitting functions to data
===================================
Often, you have an idea for the form of a function for a model and you need to select parameters that will make the model function a good match for observations. The process of selecting parameters to match observations is called ***model fitting***.
To illustrate, the data in the file “utilities.csv” records the average temperature each month (in degrees F) as well as the monthly natural gas usage (in cubic feet, ccf). There is, as you might expect, a strong relationship between the two.
```
Utils <- read.csv("http://www.mosaic-web.org/go/datasets/utilities.csv")
gf_point(ccf ~ temp, data = Utils) %>%
gf_labs(y = "Natural gas usage (ccf/month)",
x = "Average outdoor temperature (F)")
```
Many different sorts of functions might be used to represent these data. One of the simplest, and most commonly used in modeling, is a straight\-line function \\(f(x) \= A x \+ B\\). In the function \\(f(x)\\), the variable \\(x\\) stands for the input, while \\(A\\) and \\(B\\) are parameters. It’s important to keep track of the names of the inputs and outputs when fitting models to data – you need to arrange for those names to match the corresponding variables in the data.
With the utilities data, the input is the temperature, temp. The output that is to be modeled is ccf. To fit the model function to the data, you write down the formula with the appropriate names of inputs, parameters, and the output in the right places:
```
f <- fitModel(ccf ~ A * temp + B, data = Utils)
```
The output of `fitModel()` is a function of the same mathematical form as you specified in the first argument (here, `ccf ~ A * temp + B`) with specific numerical values given to the parameters in order to make the function best match the data. How does `fitModel()` know which of the quantities in the mathematical form are variables and which are parameters? Anything contained in the data used for fitting is a variable (here `temp`); other things (here, `A` and `B`) are parameters.
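Since the result is an ordinary R function, you can evaluate it at any input you like, say a (hypothetical) 40\-degree month; the fitted parameter values can typically be inspected with `coef()`, assuming the `mosaic` implementation of `fitModel()`:
```
f(temp = 40)   # predicted gas usage at an average temperature of 40 F
coef(f)        # the fitted values of A and B (assumed coef() method)
```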
```
gf_point(ccf ~ temp, data = Utils) %>%
slice_plot(f(temp) ~ temp)
```
You can add other functions into the mix easily. For instance, you might think that `sqrt(temp)` works in there somehow. Try it out!
```
f2 <- fitModel(
ccf ~ A * temp + B + C *sqrt(temp),
data = Utils)
gf_point(
ccf ~ temp, data = Utils) %>%
slice_plot(f2(temp) ~ temp)
```
This example has involved just one input variable. Throughout the natural and social sciences, a very important and widely used technique is to use multiple variables in a projection. To illustrate, look at the data in `"used-hondas.csv"` on the prices of used Honda automobiles.
```
Hondas <- read.csv("http://www.mosaic-web.org/go/datasets/used-hondas.csv")
head(Hondas)
```
```
## Price Year Mileage Location Color Age
## 1 20746 2006 18394 St.Paul Grey 1
## 2 19787 2007 8 St.Paul Black 0
## 3 17987 2005 39998 St.Paul Grey 2
## 4 17588 2004 35882 St.Paul Black 3
## 5 16987 2004 25306 St.Paul Grey 3
## 6 16987 2005 33399 St.Paul Black 2
```
As you can see, the data set includes the variables `Price`, `Age`, and `Mileage`. It seems reasonable to think that price will depend both on the mileage and age of the car. Here’s a very simple model that uses both variables:
```
carPrice1 <- fitModel(
Price ~ A + B * Age + C * Mileage, data = Hondas
)
```
You can plot the fitted function out:
```
contour_plot(
carPrice1(Age = age, Mileage = miles) ~ age + miles,
domain(age=2:8, miles=range(0, 60000)))
```
Consider now another way of reading a contour plot. For this example, let’s focus on the contour at $17,000\. Any combination of age and miles that falls on this contour produces the same car price: $17,000\. The **slope** of the contour tells you the trade\-off between mileage and age. Look at two points on the contour that differ by 10,000 miles. The corresponding difference in age is about 1\.5 years. So, when comparing two cars at the same price, a decrease in mileage by 10,000 is balanced by an increase in age of 1\.5 years.
A somewhat more sophisticated model might include what’s called an ***interaction*** between age and mileage, recognizing that the effect of age might be different depending on mileage.
```
carPrice2 <- fitModel(
Price ~ A + B * Age + C * Mileage + D * Age * Mileage,
data = Hondas)
```
Again, once the function has been fitted to the data, you can plot it in the ordinary way:
```
contour_plot(
carPrice2(Age=age, Mileage=miles) ~ age + miles,
domain(age = range(0, 8), miles = range(0, 60000)))
```
The shape of the contours is slightly different than in `carPrice1()`; they bulge upward a little. Interpreting such contours requires a bit of practice. Look at a small region on one of the contours. The slope of the contour tells you the ***trade\-off*** between mileage and age. To see this, look at the $17,000 contour where it passes through age \= 6 years and mileage \= 10,000 miles. Now look at the $17,000 contour at zero mileage. In moving along the contour, the price stays constant. (That’s how contours are defined: the points where the price is the same, in this case $17,000\.) Lowering the mileage by 10,000 miles is balanced out by increasing the age by just under one year. (The $17,000 contour has a point at zero mileage and 6\.8 years.) Another way to say this is that the effect of an age increase of 0\.8 years is the same as a mileage *decrease* of 10,000 miles.
Now look at the same $17,000 contour at zero age (that is, at the left extreme of the graph). A decrease in mileage by 10,000 corresponds to an increase in age of 1\.6 years. In other words, according to the model, for newer cars the relative importance of mileage vs. age is *less* than for older cars. For cars aged zero, 10,000 miles is worth 1\.6 years, but for six\-year old cars, 10,000 miles is worth only 0\.8 years.
The interaction that was added in `carPrice2()` is what produces the differing effect of mileage on price for cars of different ages.
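One way to see this algebraically: along any contour of the fitted form \\(A \+ B\\, \\mbox{Age} \+ C\\, \\mbox{Mileage} \+ D\\, \\mbox{Age} \\cdot \\mbox{Mileage}\\), the change in price is zero, so
\\\[ (B \+ D\\, \\mbox{Mileage})\\, d\\mbox{Age} \+ (C \+ D\\, \\mbox{Age})\\, d\\mbox{Mileage} \= 0
\\quad \\mbox{so that} \\quad
\\frac{d\\mbox{Mileage}}{d\\mbox{Age}} \= \- \\frac{B \+ D\\, \\mbox{Mileage}}{C \+ D\\, \\mbox{Age}} .\\]
With \\(D \= 0\\) this trade\-off is the constant \\(\-B/C\\), so the contours are parallel straight lines; with \\(D \\neq 0\\) the trade\-off shifts with age and mileage, which is the bulge visible in the plot.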
The `fitModel()` operator makes it very easy to find the parameters in any given model that make the model most closely approximate the data. The work in modeling is choosing the right form of the model (Interaction term or not? Whether to include a new variable or not?)
and interpreting the result. In the next section, we’ll look at some different choices in model form (linear vs. nonlinear) and some of the mathematical logic behind fitting.
### 6\.0\.1 Exercises
#### 6\.0\.1\.1 Exercise 1
The text of the section describes a model `carPrice1()` with Age and Mileage as the input quantities and price (in USD) as the output. The claim was made that price is discernibly a function of both `Age` and `Mileage`. Let’s make that graph again.
```
contour_plot(
carPrice1(Age = age, Mileage = miles) ~ age + miles,
  domain(age = range(0, 1), miles = range(0, 1)))
```
In the above graph, the contours are vertical.
1. What do the vertical contours say about price as a function of `Age` and `Mileage`?
1. Price depends strongly on both variables.
2. *Price depends on `Age` but not `Mileage`.*
3. Price depends on `Mileage` but not `Age`.
4. Price doesn’t depend much on either variable.
ANSWER:
Each contour corresponds to a different price. As you track horizontally with `Age`, you cross from one contour to another. But as you track vertically with `Mileage`, you don’t cross contours. This means that price does **not** depend on `Mileage`, since changing `Mileage` doesn’t lead to a change in price. But price does change with `Age`.
2. The graph of the same function shown in the body of the text has contours that slope downwards from left to right. What does this say about price as a function of `Age` and `Mileage`?
1. *Price depends strongly on both variables.*
2. Price depends on `Age` but not `Mileage`.
3. Price depends on `Mileage` but not `Age`.
4. Price doesn’t depend much on either variable.
ANSWER:
As you trace horizontally, with `Age`, you move from contour to contour: the price changes. So price depends on `Age`. The same is true when you trace vertically, with `Mileage`. So price also depends on `Mileage`.
3. The same function is being graphed both in the body of the text and in this exercise. But the graphs are very different! Explain why there is a difference and say which graph is right.
ANSWER:
Look at the tick marks on the axes. In the graph in the body of the text, `Age` runs from two to eight years. But in the exercise’s graph, `Age` runs only from zero to one year. Similarly, the graph in the body of the text has `Mileage` running from 0 to 60,000 miles, but in the exercise’s graph, `Mileage` runs from 0 to 1\.
The two graphs show the same function, so both are “right.” But the exercise’s graph is visually misleading. It’s hardly a surprise that price doesn’t change very much from 0 miles to 1 mile, but it does change (somewhat) from 0 years to 1 year.
The moral here: Pay careful attention to the axes and the range that they display. When you draw a graph, make sure that you set the ranges to something relevant to the problem at hand.
#### 6\.0\.1\.2 Exercise 2
Economists usually think about prices in terms of their logarithms. The advantage of doing this is that it doesn’t matter what currency the price is in: a given increase in log price corresponds to the same *proportional* change in price, regardless of the price level or the currency.
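A quick arithmetic check of that claim (a sketch; the factor 0\.92 below is just an arbitrary stand\-in for a currency conversion):
```
# The gap in log10 price between two cars is unchanged by converting currencies,
# because the conversion factor cancels: log10(c*p1) - log10(c*p2) = log10(p1) - log10(p2).
log10(20000) - log10(16000)                 # prices in dollars
log10(20000 * 0.92) - log10(16000 * 0.92)   # the same prices in another currency
```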
Consider a model of \\(\\log\_{10} \\mbox{price}\\) as a function of miles and age.
```
logPrice2 <- fitModel(
logPrice ~ A + B * Age + C * Mileage + D * Age * Mileage,
data = Hondas %>% mutate(logPrice = log10(Price)))
```
This model was defined to include an interaction between age and mileage. Of course, it might happen that the parameter `D` will be close to zero. That would mean that the data don’t provide any evidence for an interaction.
Fit the model and look at the contours of log price. What does the shape of the contours tell you about whether the data give evidence for an interaction in log price?
ANSWER:
```
contour_plot(
logPrice2(Age=age, Mileage=miles) ~ age + miles,
domain(age = range(0, 8), miles = range(0, 60000)))
```
The contours are pretty much straight, which suggests that there is little interaction. When interpreting log prices, you can think about an increase of, say, 0\.05 in output as corresponding to the same *proportional* increase in price. For example, an increase in log price from 4\.2 (which is \\(10^{4\.2}\\) \= 15,849\) to 4\.25 (which is \\(10^{4\.25}\\) \= 17,783\) is an increase of about 12% in actual price. A further increase in log price to 4\.3 (which is, in actual price, \\(10^{4\.3}\\) \= 19,953\) is a further 12% increase in actual price.
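The 12% figure comes straight from the arithmetic of logarithms; here’s a one\-line check:
```
# Each 0.05 step in log10 price multiplies the price by the same factor:
10^0.05   # about 1.122, i.e. roughly a 12% increase per 0.05 step
```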
#### 6\.0\.1\.3 Exercise 3: Stay near the data
Fitting functions to data is not magic. Insofar as the data constrains the plausible forms of the model, the model will be a reasonable match to the data. But for inputs for which there is no data (e.g. 0 year\-old cars with 60,000 miles) a model can do crazy things. This is particularly so if the model is complicated, say including powers of the variable, as in this one:
```
carPrice3 <- fitModel(
Price ~ A + B * Age + C * Mileage + D * Age * Mileage +
E * Age^2 + F * Mileage^2 + G * Age^2 * Mileage +
H * Age * Mileage^2,
data = Hondas)
gf_point(Mileage ~ Age, data = Hondas, fill = NA) %>%
contour_plot(
carPrice3(Age=Age, Mileage=Mileage) ~ Age + Mileage)
```
For cars under 3 years old or older cars with either very high or very low mileage, the contours are doing some crazy things! Common sense says that higher mileage or greater age results in a *lower* price. In terms of the contours, common sense translates to contours that have a negative slope. But the slope of these contours is often positive.
It helps to consider whether there are regions where there is little data. As a rule, a complicated model like `carPrice3()` is unreliable for inputs where there is little or no data.
Focus just on the regions of the plot where there is lots of data. Do the contours have the shape expected by common sense?
ANSWER:
Where there is lots of data, the local shape of the contour is indeed gently sloping downward from left to right, as anticipated by common sense.
6\.1 Curves and linear models
-----------------------------
At first glance, the terms “linear” and “curve” might seem contradictory. Lines are straight, curves are not.
The word linear in “linear models” refers to “linear combination,” not “straight line.” As you will see, you can construct complicated curves by taking linear combinations of functions, and use the linear algebra projection operation to match these curves as closely as possible to data. That process of matching is called “fitting.”
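To see how a linear combination of simple functions can produce a curve, here is a small sketch (the functions and coefficients are arbitrary, chosen only for illustration):
```
# A curved function built as a linear combination of the functions 1, x, and sin(x):
curvy <- makeFun(2 + 0.5 * x - 3 * sin(x) ~ x)
slice_plot(curvy(x) ~ x, domain(x = range(0, 10)))
```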
To illustrate, the data in the file `"utilities.csv"` records the average temperature each month (in degrees F) as well as the monthly natural gas usage (in hundreds of cubic feet, ccf). There is, as you might expect, a strong relationship between the two.
```
Utilities = read.csv("http://www.mosaic-web.org/go/datasets/utilities.csv")
gf_point(ccf ~ temp, data = Utilities)
```
Many different sorts of functions might be used to represent these data. One of the simplest and most commonly used in modeling is a straight\-line function. In terms of linear algebra, this is a linear combination of the functions \\(f\_1(T) \= 1\\) and \\(f\_2(T) \= T\\). Conventionally, of course, the straight\-line function is written \\(f(T) \= b \+ m T\\). (Perhaps you prefer to write it this way: \\(f(x) \= m x \+ b\\). Same thing.) This conventional notation merely names the scalars \\(m\\) and \\(b\\) that will participate in the linear combination. Finding the numerical scalars that best match the data (that is, “fitting the function” to the data) can be done with the linear algebra `project( )` operator.
```
project(ccf ~ temp + 1, data = Utilities)
```
```
## (Intercept) temp
## 253.098208 -3.464251
```
The `project( )` operator gives the values of the scalars. The best fitting function itself is built by using these scalar values to combine the functions involved.
```
model_fun = makeFun( 253.098 - 3.464*temp ~ temp)
gf_point(ccf ~ temp, data = Utilities) %>%
slice_plot(model_fun(temp) ~ temp)
```
You can add other functions into the mix easily. For instance, you might think that `sqrt(temp)` works in there somehow. Try it out!
```
project(ccf ~ temp + sqrt(temp) + 1, data = Utilities)
```
```
## (Intercept) temp sqrt(temp)
## 447.029273 1.377666 -63.208025
```
```
mod2 <- makeFun(447.03 + 1.378*temp - 63.21*sqrt(temp) ~ temp)
gf_point(ccf ~ temp, data = Utilities) %>% # the data
slice_plot(mod2(temp) ~ temp) %>%
gf_labs(x = "Temperature (F)",
y = "Natural gas used (ccf)")
```
Understanding the mathematics of projection is important for using it, but focus for a moment on the **notation** being used to direct the computer to carry out the linear algebra operation.
The `project( )` operator takes a series of vectors. When fitting a function to data, these vectors are coming from a data set and so the command must refer to the names of the quantities as they appear in the data set, e.g., `ccf` or `temp`. You’re allowed to perform operations on those quantities, for instance the `sqrt` in the above example, to create a new vector. The `~` is used to separate out the “target” vector from the set of one or more vectors onto which the projection is being made. In traditional mathematical notation, this operation would be written as an equation involving a matrix \\(\\mathbf A\\) composed of a set of vectors \\(\\left( \\vec{v}\_1, \\vec{v}\_2, \\ldots, \\vec{v}\_p \\right) \= {\\mathbf A}\\), a target vector \\(\\vec{b}\\), and the set of unknown coefficients \\(\\vec{x}\\). The equation that connects these quantities is written \\({\\mathbf A} \\cdot \\vec{x} \\approx \\vec{b}\\). In this notation, the process of “solving” for \\(\\vec{x}\\) is implicit. The computer notation rearranges this to
\\\[ \\vec{x} \= \\mbox{\\texttt{project(}} \\vec{b} \\sim \\vec{v}\_1 \+
\\vec{v}\_2 \+ \\ldots \+ \\vec{v}\_p \\mbox{\\texttt{)}} .\\]
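If you want to see this correspondence concretely, you can build the matrix \\(\\mathbf A\\) and the target \\(\\vec{b}\\) yourself and solve the least\-squares problem with base R; the result should match the `project( )` output shown earlier. (A sketch, assuming the `Utilities` data frame from above.)
```
# The columns of A are the functions being combined: the constant function and temp.
A <- cbind(1, Utilities$temp)
b <- Utilities$ccf
qr.solve(A, b)   # least-squares solution of A x ~ b; compare with project(ccf ~ temp + 1, ...)
```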
Once you’ve done the projection and found the coefficients, you can construct the corresponding mathematical function by using the coefficients in a mathematical expression to create a function. As with all functions, the names you use for the arguments are a matter of personal choice, although it’s sensible to use names that remind you of what’s being represented by the function.
The choice of what vectors to use in the projection is yours: part of the modeler’s art.
Throughout the natural and social sciences, a very important and widely used technique is to use multiple variables in a projection. To illustrate, look at the data in `"used-hondas.csv"` on the prices of used Honda automobiles.
```
Hondas = read.csv("http://www.mosaic-web.org/go/datasets/used-hondas.csv")
head(Hondas)
```
```
## Price Year Mileage Location Color Age
## 1 20746 2006 18394 St.Paul Grey 1
## 2 19787 2007 8 St.Paul Black 0
## 3 17987 2005 39998 St.Paul Grey 2
## 4 17588 2004 35882 St.Paul Black 3
## 5 16987 2004 25306 St.Paul Grey 3
## 6 16987 2005 33399 St.Paul Black 2
```
As you can see, the data set includes the variables `Price`, `Age`, and `Mileage`. It seems reasonable to think that price will depend both on the mileage and age of the car. Here’s a very simple model that uses both variables:
```
project(Price ~ Age + Mileage + 1, data = Hondas)
```
```
## (Intercept) Age Mileage
## 2.133049e+04 -5.382931e+02 -7.668922e-02
```
You can plot that out as a mathematical function:
```
car_price <- makeFun(21330-5.383e2*age-7.669e-2*miles ~ age & miles)
contour_plot(car_price(age, miles) ~ age + miles,
domain(age=range(2, 8), miles=range(0, 60000))) %>%
  gf_labs(title = "Price of car (USD)")
```
A somewhat more sophisticated model might include what’s called an “interaction” between age and mileage, recognizing that the effect of age might be different depending on mileage.
```
project(Price ~ Age + Mileage + Age*Mileage + 1, data = Hondas)
```
```
## (Intercept) Age Mileage Age:Mileage
## 2.213744e+04 -7.494928e+02 -9.413962e-02 3.450033e-03
```
```
car_price2 <- makeFun(22137 - 7.495e2*age - 9.414e-2*miles +
3.450e-3*age*miles ~ age & miles)
contour_plot(
car_price2(Age, Mileage) ~ Age + Mileage,
domain(Age = range(0, 10), Mileage = range(0, 100000))) %>%
gf_labs(title = "Price of car (USD)")
```
### 6\.1\.1 Exercises
#### 6\.1\.1\.1 Exercise 1: Fitting Polynomials
Most college students take a course in algebra that includes a lot about polynomials, and polynomials are very often used in modeling. (Probably, they are used more often than they should be. And algebra teachers might be disappointed to hear that the most important polynomial models are low\-order ones, e.g., \\(f(x,y) \= a \+ bx \+ cy \+ dx y\\), rather than cubics or quartics, etc.) Fitting a polynomial to data is a matter of linear algebra: constructing the appropriate vectors to represent the various powers. For example, here’s how to fit a quadratic model to the `ccf` versus `temp` variables in the `"utilities.csv"` data file:
```
Utilities = read.csv("http://www.mosaic-web.org/go/datasets/utilities.csv")
project(ccf ~ 1 + temp + I(temp^2), data = Utilities)
```
```
## (Intercept) temp I(temp^2)
## 317.58743630 -6.85301947 0.03609138
```
You may wonder, what is the `I( )` for? It turns out that there are different notations for statistics and mathematics, and that the `^` has a subtly different meaning in R formulas than simple exponentiation. The `I( )` tells the software to take the exponentiation literally in a mathematical sense.
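If you’re curious, you can see the difference `I( )` makes by looking at the columns of the model matrix R builds from each formula. (A small sketch using base R’s `model.matrix()` and a tiny made\-up data frame.)
```
d <- data.frame(temp = c(10, 20, 30))
# Without I(), ^ is formula syntax (crossing of terms), so temp^2 collapses to just temp:
colnames(model.matrix(~ temp + temp^2, data = d))
# With I(), the squaring is taken literally and a separate column is created:
colnames(model.matrix(~ temp + I(temp^2), data = d))
```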
The coefficients tell us that the best\-fitting quadratic model of `ccf` versus `temp` is:
```
ccfQuad <- makeFun(317.587 - 6.853*T + 0.0361*T^2 ~ T)
gf_point(ccf ~ temp, data = Utilities) %>%
slice_plot(ccfQuad(temp) ~ temp)
```
To find the value of this model at a given temperature, just evaluate the function. (And note that `ccfQuad( )` was defined with an input variable `T`.)
```
ccfQuad(T=72)
```
```
## [1] 11.3134
```
1. Fit a 3rd\-order polynomial of `ccf` versus `temp` to the utilities data. What is the value of this model for a temperature of 32 degrees? {87,103,128,*142*,143,168,184}
ANSWER:
```
project(ccf ~ 1 + temp + I(temp^2) + I(temp^3), data = Utilities)
```
```
## (Intercept) temp I(temp^2) I(temp^3)
## 2.550709e+02 -1.427408e+00 -9.643482e-02 9.609511e-04
```
```
ccfCubic <-
makeFun(2.551e2 - 1.427*T -
9.643e-2*T^2 + 9.6095e-4*T^3 ~ T)
gf_point(ccf ~ temp, data = Utilities) %>%
slice_plot(ccfCubic(temp) ~ temp)
```
```
ccfCubic(32)
```
```
## [1] 142.1801
```
1. Fit a 4th\-order polynomial of `ccf` versus `temp` to the utilities data. What is the value of this model for a temperature of 32 degrees? {87,103,128,140,*143*,168,184}
ANSWER:
```
project(ccf ~ 1 + temp + I(temp^2) + I(temp^3) + I(temp^4),
        data = Utilities)
```
```
## (Intercept) temp I(temp^2) I(temp^3) I(temp^4)
## 1.757579e+02 8.225746e+00 -4.815403e-01 7.102673e-03 -3.384490e-05
```
```
ccfQuartic <- makeFun(1.7576e2 + 8.225*T - 4.815e-1*T^2 +
                        7.103e-3*T^3 - 3.384e-5*T^4 ~ T)
gf_point(ccf ~ temp, data = Utilities) %>%
  slice_plot(ccfQuartic(temp) ~ temp) %>%
  gf_labs(y = "Natural gas use (ccf)", x = "Temperature (F)")
```
```
ccfQuartic(32)
```
```
## [1] 143.1713
```
1. Make a plot of the **difference** between the 3rd\- and 4th\-order models over a temperature range from 20 to 60 degrees. What’s the biggest difference (in absolute value) between the outputs of the two models?
1. About 1 ccf.
2. *About 4 ccf.*
3. About 8 ccf.
4. About 1 degree F.
5. About 4 degrees F.
6. About 8 degress F.
ANSWER:
The output of the models is in units of ccf.
```
slice_plot(ccfQuartic(temp) - ccfCubic(temp) ~ temp,
           domain(temp = range(20, 60)))
```
The difference between the two models is always within about 4 ccf.
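If you prefer a single number to reading the plot, evaluate both models on a fine grid and take the largest absolute difference (a quick sketch):
```
temps <- seq(20, 60, by = 0.1)
max(abs(ccfQuartic(temps) - ccfCubic(temps)))   # roughly 4 ccf
```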
#### 6\.1\.1\.2 Exercise 2: Multiple Regression
In 1980, the magazine Consumer Reports studied 1978\-79 model cars to explore how different factors influence fuel economy. The measurements included fuel efficiency in miles per gallon, curb weight in pounds, engine power in horsepower, and the number of cylinders. These variables are included in the file `"cardata.csv"`.
```
Cars = read.csv("http://www.mosaic-web.org/go/datasets/cardata.csv")
head(Cars)
```
```
## mpg pounds horsepower cylinders tons constant
## 1 16.9 3967.60 155 8 2.0 1
## 2 15.5 3689.14 142 8 1.8 1
## 3 19.2 3280.55 125 8 1.6 1
## 4 18.5 3585.40 150 8 1.8 1
## 5 30.0 1961.05 68 4 1.0 1
## 6 27.5 2329.60 95 4 1.2 1
```
1. Use these data to fit the following model of fuel economy (variable `mpg`):
\\\[ \\mbox{\\texttt{mpg}} \= x\_0 \+ x\_1 \\mbox{\\texttt{pounds}}. \\]
What’s the value of the model for an input of 2000 pounds? {14\.9,19\.4,21\.1,25\.0,*28\.8*,33\.9,35\.2}
ANSWER:
```
project(mpg ~ pounds + 1, data = Cars)
```
```
## (Intercept) pounds
## 43.188646127 -0.007200773
```
```
43.1886 - 0.00720*2000
```
```
## [1] 28.7886
```
1. Use the data to fit the following model of fuel economy (variable `mpg`):
\\\[ \\mbox{\\texttt{mpg}} \= y\_0 \+ y\_1 \\mbox{\\texttt{pounds}} \+ y\_2 \\mbox{\\texttt{horsepower}}. \\]
1. What’s the value of the model for an input of 2000 pounds and 150 horsepower? {14\.9,*19\.4*,21\.1,25\.0,28\.8,33\.9,35\.2}
2. What’s the value of the model for an input of 2000 pounds and 50 horsepower? {14\.9,19\.4,21\.1,25\.0,28\.8,*33\.9*,35\.2}
ANSWER:
```
project(mpg ~ pounds + horsepower + 1, data = Cars)
```
```
## (Intercept) pounds horsepower
## 46.932738241 -0.002902265 -0.144930546
```
```
mod_fun <- makeFun(46.933 - 0.00290*lbs - 0.1449*hp ~ lbs + hp)
mod_fun(lbs = 2000, hp = 50)
```
```
## [1] 33.888
```
1. Construct a linear function that uses `pounds`, `horsepower` and `cylinders` to model `mpg`. We don’t have a good way to plot out functions of three input variables, but you can still write down the formula. What is it?
#### 6\.1\.1\.3 Exercise 3: The Intercept
Go back to the problem where you fit polynomials to the `ccf` versus `temp` data. Do it again, but this time tell the software to remove the intercept from the set of vectors. (You do this with the notation `-1` in the `project( )` operator.)
Plot out the polynomials you find over a temperature range from \-10 to 50 degrees, and plot the raw data over them. There’s something very strange about the models you will get. What is it?
1. The computer refuses to carry out this instruction.
2. All the models show a constant output of `ccf`.
3. *All the models have a `ccf` of zero when `temp` is zero.*
4. All the models are exactly the same!
6\.2 `fitModel()`
-----------------
6\.3 Functions with nonlinear parameters
----------------------------------------
The techniques of linear algebra can be used to find the best linear combination of a set of functions. But, often, there are parameters in functions that appear in a nonlinear way. Examples include \\(k\\) in \\(f(t) \= A \\exp( k t ) \+ C\\) and \\(P\\) in \\(A \\sin(\\frac{2\\pi}{P} t) \+ C\\). Finding these nonlinear parameters cannot be done directly using linear algebra, although the methods of linear algebra do help in simplifying the situation.
Fortunately, the idea that the distance between functions can be measured works perfectly well when there are nonlinear parameters involved. So we’ll continue to use the “sum of square residuals” when evaluating how close a function approximation is to a set of data.
6\.4 Exponential functions
--------------------------
To illustrate, consider the `"Income-Housing.csv"` data which
shows an exponential relationship between the fraction of families
with two cars and income:
```
Families <- read.csv("http://www.mosaic-web.org/go/datasets/Income-Housing.csv")
gf_point(TwoVehicles ~ Income, data = Families)
```
The pattern of the data suggests exponential “decay” towards close to 100% of the families having two vehicles. The mathematical form of this exponential function is \\(A \\exp(k Y) \+ C\\), where \\(Y\\) stands for income. \\(A\\) and \\(C\\) are unknown linear parameters. \\(k\\) is an unknown nonlinear parameter – it will be negative for exponential decay. Linear algebra allows us to find the best linear parameters \\(A\\) and \\(C\\) in order to match the data. But how to find \\(k\\)?
Suppose you make a guess at \\(k\\). The guess doesn’t need to be completely random; you can see from the data themselves that the “half\-life” is something like $25,000\. The parameter \\(k\\) corresponds to the half\-life: it’s \\(\\ln(0\.5\)/\\mbox{half\-life}\\), so here a good guess for \\(k\\) is \\(\\ln(0\.5\)/25000\\), that is
```
kguess <- log(0.5) / 25000
kguess
```
```
## [1] -2.772589e-05
```
Starting with that guess, you can find the best values of the linear
parameters \\(A\\) and \\(C\\) through linear algebra techniques:
```
project( TwoVehicles ~ 1 + exp(Income*kguess), data = Families)
```
```
## (Intercept) exp(Income * kguess)
## 110.4263 -101.5666
```
Make sure that you understand completely the meaning of the above statement. It does NOT mean that `TwoVehicles` is the sum \\(1 \+ \\exp(\\mbox{Income} \\times \\mbox{kguess})\\). Rather, it means that you are searching for the linear combination of the two functions \\(1\\) and \\(\\exp(\\mbox{Income} \\times \\mbox{kguess})\\) that matches `TwoVehicles` as closely as possible. The values returned by `project( )` tell you what this combination will be: how much of \\(1\\) and how much of \\(\\exp(\\mbox{Income}\\times\\mbox{kguess})\\) to add together to approximate `TwoVehicles`.
You can construct the function that is the best linear combination by explicitly adding together the two functions:
```
f <- makeFun( 110.43 - 101.57*exp(Income * k) ~ Income, k = kguess)
gf_point(TwoVehicles ~ Income, data = Families) %>%
slice_plot(f(Income) ~ Income)
```
The graph goes satisfyingly close to the data points. But you can also look at the numerical values of the function for any income:
```
f(Income = 10000)
```
```
## [1] 33.45433
```
```
f(Income = 50000)
```
```
## [1] 85.0375
```
It’s particularly informative to look at the values of the function
for the specific `Income` levels in the data used for fitting,
that is, the data frame `Families`:
```
Results <- Families %>%
dplyr::select(Income, TwoVehicles) %>%
mutate(model_val = f(Income = Income),
resids = TwoVehicles - model_val)
Results
```
```
## Income TwoVehicles model_val resids
## 1 3914 17.3 19.30528 -2.0052822
## 2 10817 34.3 35.17839 -0.8783904
## 3 21097 56.4 53.84097 2.5590313
## 4 34548 75.3 71.45680 3.8432013
## 5 51941 86.6 86.36790 0.2320981
## 6 72079 92.9 96.66273 -3.7627306
```
The ***residuals*** are the difference between these model values and the
actual values of `TwoVehicles` in the data set.
The `resids` column gives the residual for each row. But you can also think of the `resids` column as a ***vector***. Recall that the square\-length of a vector is the sum of its squared components; for the `resids` vector, that is the sum of squared residuals:
```
sum(Results$resids^2)
```
```
## [1] 40.32358
```
This square length of the `resids` vector is an important way to quantify how well the model fits the data.
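A closely related summary is the RMS (root\-mean\-square) error, which carries the same information but on the scale of the data; a one\-line sketch:
```
sqrt(mean(Results$resids^2))   # the RMS error: square root of the mean of the squared residuals
```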
6\.5 Optimizing the guesses
---------------------------
Keep in mind that the sum of square residuals is a function of \\(k\\). The above value is just for our particular guess \\(k \=\\) `kguess`. Rather than using just one guess for \\(k\\), you can look at many different possibilities. To see them all at the same time, let’s plot out the sum of square residuals as a *function of* \\(k\\). We’ll do this by building a function that calculates the sum of square residuals for any given value of \\(k\\).
```
sum_square_resids <- Vectorize(function(k) {
sum((Families$TwoVehicles - f(Income=Families$Income, k)) ^ 2)
})
slice_plot(
sum_square_resids(k) ~ k,
domain(k = range(log(0.5)/40000,log(0.5)/20000)))
```
This is a rather complicated computer command, but the graph is straightforward. You can see that the “best” value of \\(k\\), that is, the value of \\(k\\) that makes the sum of square residuals as small as possible, is near \\(k\=\-2\.8\\times10^{\-5}\\) — not very far from the original guess, as it happens. (That’s because the half\-life is very easy to estimate.)
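You can also let the computer find the minimizing \\(k\\) directly rather than reading it off the plot. A sketch using base R’s `optimize()` over the same interval as the plot:
```
best_k <- optimize(sum_square_resids, interval = c(log(0.5)/40000, log(0.5)/20000))
best_k$minimum     # the value of k that minimizes the sum of square residuals
best_k$objective   # the minimized sum of square residuals
```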
### 6\.5\.1 Exercises
NEED TO WRITE A SHINY APP FOR THESE EXERCISES and change the text accordingly.
To continue your explorations in nonlinear curve fitting, you are going to use a special purpose function that does much of the work for you while allowing you to try out various values of \\(k\\) by moving a slider.
To set it to work on these data, give the following commands, which you can just cut and paste from here:
```
Families = read.csv("http://www.mosaic-web.org/go/datasets/Income-Housing.csv")
Families <- Families %>%
mutate(tens = Income / 10000)
mFitExp(TwoVehicles ~ tens, data = Families)
```
You should see a graph with the data points and a continuous function drawn in red. There will also be a control box with a few check\-boxes and a slider.
The check\-boxes indicate which functions you want to take a linear combination of. You should check “Constant” and “exp(kx)”. Then you can use the slider to vary \\(k\\) in order to make the function approximate the data as best you can. At the top of the graph is the RMS error, which here corresponds to the square root of the sum of square residuals. Making the RMS error as small as possible will give you the best \\(k\\).
You may wonder, what was the point of the line that said
```
mutate(tens = Income / 10000)
```
This is just a concession to the imprecision of the slider. The
slider works over a pretty small range of possible \\(k\\), so this line
constructs a new income variable with units of tens of thousands of
dollars, rather than the original dollars. The instructions will tell
you when you need to do such things.
#### 6\.5\.1\.1 Exercise 1
The data in `"stan-data.csv"` contains measurements made by Prof. Stan Wagon of the temperature of a cooling cup of hot water. The time was measured in seconds, which is not very convenient for the slider, so translate it to minutes. Then find the best value of \\(k\\) in an exponential model.
```
water = read.csv("http://www.mosaic-web.org/go/datasets/stan-data.csv")
water$minutes = water$time/60
mFitExp( temp ~ minutes, data=water)
```
1. What’s the value of \\(k\\) that gives the smallest RMS error? {\-1\.50,*\-1\.25*,\-1\.00,\-0\.75}
2. What are the units of this \\(k\\)? (This is not an R question, but a mathematical one.)
1. seconds
2. minutes
3. per second
4. *per minute*
3. Move the slider to set \\(k\=0\.00\\). You will get an error message from the system about a “singular matrix.” This is because the function \\(e^{0x}\\) is redundant with the constant function. What is it about \\(e^{kx}\\) with \\(k\=0\\) that makes it redundant with the constant function?
#### 6\.5\.1\.2 Exercise 2
The `"hawaii.csv"` data set contains a record of ocean tide levels in Hawaii over a few days. The `time` variable is in hours, which is perfectly sensible, but you are going to rescale it to “quarter days” so that the slider will give better results. Then, you are going to use the `mFitSines( )` program to allow you to explore what happens as you vary the nonlinear parameter \\(P\\) in the linear combination \\(A \\sin(\\frac{2 \\pi}{P} t) \+ B \\cos(\\frac{2 \\pi}{P} t) \+ C\\).
```
Hawaii = read.csv("http://www.mosaic-web.org/go/datasets/hawaii.csv")
Hawaii$quarterdays = Hawaii$time/6
mFitSines(water~quarterdays, data=Hawaii)
```
Check both the \\(\\sin\\) and \\(\\cos\\) checkbox, as well as the “constant.” Then vary the slider for \\(P\\) to find the period that makes the RMS error as small as possible. Make sure the slider labeled \\(n\\) stays at \\(n\=1\\). What is the period \\(P\\) that makes the RMS error as small as possible (in terms of “quarter days”)?
1. 3\.95 quarter days
2. 4\.00 quarter days
3. *4\.05 quarter days*
4. 4\.10 quarter days
You may notice that the “best fitting” sine wave is not particularly close to the data points. One reason for this is that the pattern is more complicated than a simple sine wave. You can get a better approximation by including additional sine functions with different periods. By moving the \\(n\\) slider to \\(n\=2\\), you will include both the sine and cosine functions of period \\(P\\) and of period \\(P/2\\) — the “first harmonic.” Setting \\(n\=2\\) will give a markedly better match to the data.
What period \\(P\\) shows up as best when you have \\(n\=2\\): {3\.92,4\.0,4\.06,*4\.09*,4\.10,4\.15}
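Once the slider has given you a good value of \\(P\\), the linear part of the fit can be recovered directly with `project( )`, just as before. A sketch, assuming the `Hawaii` data and `quarterdays` variable from above, and using the single\-period value found in this exercise:
```
P <- 4.05   # period in quarter-days, from the exercise above
project(water ~ sin(2 * pi * quarterdays / P) + cos(2 * pi * quarterdays / P) + 1,
        data = Hawaii)
```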
### 6\.0\.1 Exercises
#### 6\.0\.1\.1 Exercise 1
The text of the section describes a model `carPrice1()` with Age and Mileage as the input quantities and price (in USD) as the output. The claim was made that price is discernably a function of both `Age` and `Mileage`. Let’s make that graph again.
```
contour_plot(
carPrice1(Age = age, Mileage = miles) ~ age + miles,
domain(age = range(0, 8), miles = range(0, 60000)))
```
In the above graph, the contours are vertical.
1. What do the vertical contours say about price as a function of `Age` and `Mileage`?
1. Price depends strongly on both variables.
2. *Price depends on `Age` but not `Mileage`.*
3. Price depends on `Mileage` but not `Age`.
4. Price doesn’t depend much on either variable.
ANSWER:
Each contour corresponds to a different price. As you track horizontally with `Age`, you cross from one contour to another. But as you track vertically with `Mileage`, you don’t cross contours. This means that price does **not** depend on `Mileage`, since changing `Mileage` doesn’t lead to a change in price. But price does change with `Age`.
2. The graph of the same function shown in the body of the text has contours that slope downwards from left to right. What does this say about price as a function of `Age` and `Mileage`?
1. *Price depends strongly on both variables.*
2. Price depends on `Age` but not `Mileage`.
3. Price depends on `Mileage` but not `Age`.
4. Price doesn’t depend much on either variable.
ANSWER:
As you trace horizontally, with `Age`, you move from contour to contour: the price changes. So price depends on `Age`. The same is true when you trace vertically, with `Mileage`. So price also depends on `Mileage`.
3. The same function is being graphed both in the body of the text and in this exercise. But the graphs are very different! Explain why there is a difference and say which graph is right.
ANSWER:
Look at the tick marks on the axes. In the graph in the body of the text, `Age` runs from two to eight years. But in the exercise’s graph, `Age` runs only from zero to one year. Similarly, the graph in the body of the text has `Mileage` running from 0 to 60,000 miles, but in the exercise’s graph, `Mileage` runs from 0 to 1\.
The two graphs show the same function, so both are “right.” But the exercise’s graph is visually misleading. It’s hardly a surprise that price doesn’t change very much from 0 miles to 1 mile, but it does change (somewhat) from 0 years to 1 year.
The moral here: Pay careful attention to the axes and the range that they display. When you draw a graph, make sure that you set the ranges to something relevant to the problem at hand.
#### 6\.0\.1\.2 Exercise 2
Economists usually think about prices in terms of their logarithms. The advantage of doing this is that it doesn’t matter what currency the price is in; an increase of 1 in log price is the same proportion regardless of the price or its currency.
Consider a model of \\(\\log\_10 \\mbox{price}\\) as a function of miles and age.
```
logPrice2 <- fitModel(
logPrice ~ A + B * Age + C * Mileage + D * Age * Mileage,
data = Hondas %>% mutate(logPrice = log10(Price)))
```
This model was defined to include an interaction between age and mileage. Of course, it might happen that the parameter `D` will be close to zero. That would mean that the data don’t provide any evidence for an interaction.
Fit the model and look at the contours of log price. What does the shape of the contours tell you about whether the data give evidence for an interaction in log price?
ANSWER:
```
contour_plot(
logPrice2(Age=age, Mileage=miles) ~ age + miles,
domain(age = range(0, 8), miles = range(0, 60000)))
```
The contours are pretty much straight, which suggests that there is little interaction. When interpreting log prices, you can think about a increase of, say, 0\.05 in output as corresponding to the same *proportional* increase in price. For example, an increase in log price from 4\.2 (which is \\(10^{4\.2}\\) \= 15,849\) to 4\.25 (which is \\(10^{4\.25}\\) \= 17,783\) is an increase by 12% in actual price. A further increase in log price to 4\.3 (which is, in actual price, \\(10^{4\.3}\\) \= 19,953\) is a further 12% increase in actual price.
#### 6\.0\.1\.3 Exercise 3: Stay near the data
Fitting functions to data is not magic. Insofar as the data constrains the plausible forms of the model, the model will be a reasonable match to the data. But for inputs for which there is no data (e.g. 0 year\-old cars with 60,000 miles) a model can do crazy things. This is particularly so if the model is complicated, say including powers of the variable, as in this one:
```
carPrice3 <- fitModel(
Price ~ A + B * Age + C * Mileage + D * Age * Mileage +
E * Age^2 + F * Mileage^2 + G * Age^2 * Mileage +
H * Age * Mileage^2,
data = Hondas)
gf_point(Mileage ~ Age, data = Hondas, fill = NA) %>%
contour_plot(
carPrice3(Age=Age, Mileage=Mileage) ~ Age + Mileage)
```
For cars under 3 years old or older cars with either very high or very low mileage, the contours are doing some crazy things! Common sense says that higher mileage or greater age results in higher price. In terms of the contours, common sense translates to contours that have a negative slope. But the slope of these contours is often positive.
It helps to consider whether there are regions where there is little data. As a rule, a complicated model like `carPrice3()` is unreliable for inputs where there is little or no data.
Focus just on the regions of the plot where there is lots of data. Do the contours have the shape expected by common sense?
ANSWER:
Where there is lots of data, the local shape of the contour is indeed gently sloping downward from left to right, as anticipated by common sense.
#### 6\.0\.1\.1 Exercise 1
The text of the section describes a model `carPrice1()` with Age and Mileage as the input quantities and price (in USD) as the output. The claim was made that price is discernably a function of both `Age` and `Mileage`. Let’s make that graph again.
```
contour_plot(
carPrice1(Age = age, Mileage = miles) ~ age + miles,
domain(age = range(0, 8), miles = range(0, 60000)))
```
In the above graph, the contours are vertical.
1. What do the vertical contours say about price as a function of `Age` and `Mileage`?
1. Price depends strongly on both variables.
2. *Price depends on `Age` but not `Mileage`.*
3. Price depends on `Mileage` but not `Age`.
4. Price doesn’t depend much on either variable.
ANSWER:
Each contour corresponds to a different price. As you track horizontally with `Age`, you cross from one contour to another. But as you track vertically with `Mileage`, you don’t cross contours. This means that price does **not** depend on `Mileage`, since changing `Mileage` doesn’t lead to a change in price. But price does change with `Age`.
2. The graph of the same function shown in the body of the text has contours that slope downwards from left to right. What does this say about price as a function of `Age` and `Mileage`?
1. *Price depends strongly on both variables.*
2. Price depends on `Age` but not `Mileage`.
3. Price depends on `Mileage` but not `Age`.
4. Price doesn’t depend much on either variable.
ANSWER:
As you trace horizontally, with `Age`, you move from contour to contour: the price changes. So price depends on `Age`. The same is true when you trace vertically, with `Mileage`. So price also depends on `Mileage`.
3. The same function is being graphed both in the body of the text and in this exercise. But the graphs are very different! Explain why there is a difference and say which graph is right.
ANSWER:
Look at the tick marks on the axes. In the graph in the body of the text, `Age` runs from two to eight years. But in the exercise’s graph, `Age` runs only from zero to one year. Similarly, the graph in the body of the text has `Mileage` running from 0 to 60,000 miles, but in the exercise’s graph, `Mileage` runs from 0 to 1\.
The two graphs show the same function, so both are “right.” But the exercise’s graph is visually misleading. It’s hardly a surprise that price doesn’t change very much from 0 miles to 1 mile, but it does change (somewhat) from 0 years to 1 year.
The moral here: Pay careful attention to the axes and the range that they display. When you draw a graph, make sure that you set the ranges to something relevant to the problem at hand.
#### 6\.0\.1\.2 Exercise 2
Economists usually think about prices in terms of their logarithms. The advantage of doing this is that it doesn’t matter what currency the price is in; an increase of 1 in log price is the same proportion regardless of the price or its currency.
Consider a model of \\(\\log\_10 \\mbox{price}\\) as a function of miles and age.
```
logPrice2 <- fitModel(
logPrice ~ A + B * Age + C * Mileage + D * Age * Mileage,
data = Hondas %>% mutate(logPrice = log10(Price)))
```
This model was defined to include an interaction between age and mileage. Of course, it might happen that the parameter `D` will be close to zero. That would mean that the data don’t provide any evidence for an interaction.
Fit the model and look at the contours of log price. What does the shape of the contours tell you about whether the data give evidence for an interaction in log price?
ANSWER:
```
contour_plot(
logPrice2(Age=age, Mileage=miles) ~ age + miles,
domain(age = range(0, 8), miles = range(0, 60000)))
```
The contours are pretty much straight, which suggests that there is little interaction. When interpreting log prices, you can think about a increase of, say, 0\.05 in output as corresponding to the same *proportional* increase in price. For example, an increase in log price from 4\.2 (which is \\(10^{4\.2}\\) \= 15,849\) to 4\.25 (which is \\(10^{4\.25}\\) \= 17,783\) is an increase by 12% in actual price. A further increase in log price to 4\.3 (which is, in actual price, \\(10^{4\.3}\\) \= 19,953\) is a further 12% increase in actual price.
#### 6\.0\.1\.3 Exercise 3: Stay near the data
Fitting functions to data is not magic. Insofar as the data constrains the plausible forms of the model, the model will be a reasonable match to the data. But for inputs for which there is no data (e.g. 0 year\-old cars with 60,000 miles) a model can do crazy things. This is particularly so if the model is complicated, say including powers of the variable, as in this one:
```
carPrice3 <- fitModel(
Price ~ A + B * Age + C * Mileage + D * Age * Mileage +
E * Age^2 + F * Mileage^2 + G * Age^2 * Mileage +
H * Age * Mileage^2,
data = Hondas)
gf_point(Mileage ~ Age, data = Hondas, fill = NA) %>%
contour_plot(
carPrice3(Age=Age, Mileage=Mileage) ~ Age + Mileage)
```
For cars under 3 years old or older cars with either very high or very low mileage, the contours are doing some crazy things! Common sense says that higher mileage or greater age results in higher price. In terms of the contours, common sense translates to contours that have a negative slope. But the slope of these contours is often positive.
It helps to consider whether there are regions where there is little data. As a rule, a complicated model like `carPrice3()` is unreliable for inputs where there is little or no data.
Focus just on the regions of the plot where there is lots of data. Do the contours have the shape expected by common sense?
ANSWER:
Where there is lots of data, the local shape of the contour is indeed gently sloping downward from left to right, as anticipated by common sense.
6\.1 Curves and linear models
-----------------------------
At first glance, the terms “linear” and “curve” might seem contradictory. Lines are straight, curves are not.
The word linear in “linear models” refers to “linear combination,” not “straight line.” As you will see, you can construct complicated curves by taking linear combinations of functions, and use the linear algebra projection operation to match these curves as closely as possible to data. That process of matching is called “fitting.”
To illustrate, the data in the file `"utilities.csv"` records the average temperature each month (in degrees F) as well as the monthly natural gas usage (in cubic feet, ccf). There is, as you might expect, a strong relationship between the two.
```
Utilities = read.csv("http://www.mosaic-web.org/go/datasets/utilities.csv")
gf_point(ccf ~ temp, data = Utilities)
```
Many different sorts of functions might be used to represent these data. One of the simplest and most commonly used in modeling is a straight\-line function. In terms of linear algebra, this is a linear combination of the functions \\(f\_1(T) \= 1\\) and \\(f\_2(T) \= T\\). Conventionally, of course, the straight\-line function is written \\(f(T) \= b \+ m T\\). (Perhaps you prefer to write it this way: \\(f(x) \= m x \+ b\\). Same thing.) This conventional notation is merely naming the scalars as \\(m\\) and \\(b\\) that will participate in the linear combination. To find the numerical scalars that best match the data — to “fit the function” to the data — can be done with the linear algebra `project( )` operator.
```
project(ccf ~ temp + 1, data = Utilities)
```
```
## (Intercept) temp
## 253.098208 -3.464251
```
The `project( )` operator gives the values of the scalars. The best fitting function itself is built by using these scalar values to combine the functions involved.
```
model_fun = makeFun( 253.098 - 3.464*temp ~ temp)
gf_point(ccf ~ temp, data=Utils) %>%
slice_plot(model_fun(temp) ~ temp)
```
You can add other functions into the mix easily. For instance, you might think that `sqrt(T)` works in there somehow. Try it out!
```
project(ccf ~ temp + sqrt(temp) + 1, data = Utils)
```
```
## (Intercept) temp sqrt(temp)
## 447.029273 1.377666 -63.208025
```
```
mod2 <- makeFun(447.03 + 1.378*temp - 63.21*sqrt(temp) ~ temp)
gf_point(ccf ~ temp, data=Utils) %>% # the data
slice_plot(mod2(temp) ~ temp) %>%
gf_labs(x = "Temperature (F)",
y = "Natural gas used (ccf)")
```
Understanding the mathematics of projection is important for using it, but focus for a moment on the **notation** being used to direct the computer to carry out the linear algebra notation.
The `project( )` operator takes a series of vectors. When fitting a function to data, these vectors are coming from a data set and so the command must refer to the names of the quantities as they appear in the data set, e.g., `ccf` or `temp`. You’re allowed to perform operations on those quantities, for instance the `sqrt` in the above example, to create a new vector. The `~` is used to separate out the “target” vector from the set of one or more vectors onto which the projection is being made. In traditional mathematical notation, this operation would be written as an equation involving a matrix \\(\\mathbf A\\) composed of a set of vectors \\(\\left( \\vec{v}\_1, \\vec{v}\_2, \\ldots, \\vec{v}\_p \\right) \= {\\mathbf A}\\), a target vector \\(\\vec{b}\\), and the set of unknown coefficients \\(\\vec{x}\\). The equation that connects these quantities is written \\({\\mathbf A} \\cdot \\vec{x} \\approx \\vec{b}\\). In this notation, the process of “solving” for \\(\\vec{x}\\) is implicit. The computer notation rearranges this to
\\\[ \\vec{x} \= \\mbox{\\texttt{project(}} \\vec{b} \\sim \\vec{v}\_1 \+
\\vec{v}\_2 \+ \\ldots \+ \\vec{v}\_p \\mbox{\\texttt{)}} .\\]
Once you’ve done the projection and found the coefficients, you can construct the corresponding mathematical function by using the coefficients in a mathematical expression to create a function. As with all functions, the names you use for the arguments are a matter of personal choice, although it’s sensible to use names that remind you of what’s being represented by the function.
The choice of what vectors to use in the projection is yours: part of the modeler’s art.
Throughout the natural and social sciences, a very important and widely used technique is to use multiple variables in a projection. To illustrate, look at the data in `"used-hondas.csv"` on the prices of used Honda automobiles.
```
Hondas = read.csv("http://www.mosaic-web.org/go/datasets/used-hondas.csv")
head(Hondas)
```
```
## Price Year Mileage Location Color Age
## 1 20746 2006 18394 St.Paul Grey 1
## 2 19787 2007 8 St.Paul Black 0
## 3 17987 2005 39998 St.Paul Grey 2
## 4 17588 2004 35882 St.Paul Black 3
## 5 16987 2004 25306 St.Paul Grey 3
## 6 16987 2005 33399 St.Paul Black 2
```
As you can see, the data set includes the variables `Price`, `Age`, and `Mileage`. It seems reasonable to think that price will depend both on the mileage and age of the car. Here’s a very simple model that uses both variables:
```
project(Price ~ Age + Mileage + 1, data = Hondas)
```
```
## (Intercept) Age Mileage
## 2.133049e+04 -5.382931e+02 -7.668922e-02
```
You can plot that out as a mathematical function:
```
car_price <- makeFun(21330-5.383e2*age-7.669e-2*miles ~ age & miles)
contour_plot(car_price(age, miles) ~ age + miles,
domain(age=range(2, 8), miles=range(0, 60000))) %>%
gf_labs(title = "Miles per gallon")
```
A somewhat more sophisticated model might include what’s called an “interaction” between age and mileage, recognizing that the effect of age might be different depending on mileage.
```
project(Price ~ Age + Mileage + Age*Mileage + 1, data = Hondas)
```
```
## (Intercept) Age Mileage Age:Mileage
## 2.213744e+04 -7.494928e+02 -9.413962e-02 3.450033e-03
```
```
car_price2 <- makeFun(22137 - 7.495e2*age - 9.414e-2*miles +
3.450e-3*age*miles ~ age & miles)
contour_plot(
car_price2(Age, Mileage) ~ Age + Mileage,
domain(Age = range(0, 10), Mileage = range(0, 100000))) %>%
gf_labs(title = "Price of car (USD)")
```
### 6\.1\.1 Exercises
#### 6\.1\.1\.1 Exercise 1: Fitting Polynomials
Most college students take a course in algebra that includes a lot about polynomials, and polynomials are very often used in modeling. (Probably, they are used more often than they should be. And algebra teachers might be disappointed to hear that the most important polynomials models are low\-order ones, e.g., \\(f(x,y) \= a \+ bx \+ cy \+ dx y\\) rather than being cubics or quartics, etc.) Fitting a polynomial to data is a matter of linear algebra: constructing the appropriate vectors to represent the various powers. For example, here’s how to fit a quadratic model to the `ccf` versus `temp` variables in the `"utilities.csv"` data file:
```
Utilities = read.csv("http://www.mosaic-web.org/go/datasets/utilities.csv")
project(ccf ~ 1 + temp + I(temp^2), data = Utilities)
```
```
## (Intercept) temp I(temp^2)
## 317.58743630 -6.85301947 0.03609138
```
You may wonder, what is the `I( )` for? It turns out that there are different notations for statistics and mathematics, and that the `^` has a subtly different meaning in R formulas than simple exponentiation. The `I( )` tells the software to take the exponentiation literally in a mathematical sense.
The coefficients tell us that the best\-fitting quadratic model of `ccf` versus `temp` is:
```
ccfQuad <- makeFun(317.587 - 6.853*T + 0.0361*T^2 ~ T)
gf_point(ccf ~ temp, data = Utilities) %>%
slice_plot(ccfQuad(temp) ~ temp)
```
To find the value of this model at a given temperature, just evaluate the function. (And note that `ccfQuad( )` was defined with an input variable `T`.)
```
ccfQuad(T=72)
```
```
## [1] 11.3134
```
1. Fit a 3rd\-order polynomial of versus to the utilities data. What is the value of this model for a temperature of 32 degrees? {87,103,128,*142*,143,168,184}
ANSWER:
```
project(ccf ~ 1 + temp + I(temp^2) + I(temp^3), data = Utils)
```
```
## (Intercept) temp I(temp^2) I(temp^3)
## 2.550709e+02 -1.427408e+00 -9.643482e-02 9.609511e-04
```
```
ccfCubic <-
makeFun(2.551e2 - 1.427*T -
9.643e-2*T^2 + 9.6095e-4*T^3 ~ T)
gf_point(ccf ~ temp, data = Utils) %>%
slice_plot(ccfCubic(temp) ~ temp)
```
```
ccfCubic(32)
```
```
## [1] 142.1801
```
1. Fit a 4th\-order polynomial of `ccf` versus `temp` to the utilities data. What is the value of this model for a temperature of 32 degrees? {87,103,128,140,*143*,168,184}
ANSWER:
```
project(ccf ~ 1 + temp + I(temp^2) + I(temp^3) + I(temp^4),
data = Utils)
```
```
## (Intercept) temp I(temp^2) I(temp^3) I(temp^4)
## 1.757579e+02 8.225746e+00 -4.815403e-01 7.102673e-03 -3.384490e-05
```
```
ccfQuad <- makeFun(1.7576e2 + 8.225*T -4.815e-1*T^2 +
7.103e-3*T^3 - 3.384e-5*T^4 ~ T)
gf_point(ccf ~ temp, data = Utils) %>%
slice_plot(ccfQuad(temp) ~ temp) %>%
gf_labs(y = "Natural gas use (ccf)", x = "Temperature (F)")
```
```
ccfQuad(32)
```
```
## [1] 143.1713
```
1. Make a plot of the **difference** between the 3rd\- and 4th\-order models over a temperature range from 20 to 60 degrees. What’s the biggest difference (in absolute value) between the outputs of the two models?
1. About 1 ccf.
2. *About 4 ccf.*
3. About 8 ccf.
4. About 1 degree F.
5. *About 4 degrees F.*
6. About 8 degress F.
ANSWER:
The output of the models is in units of ccf.
```
slice_plot(ccfQuad(temp) - ccfCubic(temp) ~ temp,
domain(temp = range(20, 60)))
```
The difference between the two models is always within about 4 ccf.
\\end{enumerate}
#### 6\.1\.1\.2 Exercise 2: Multiple Regression
In 1980, the magazine Consumer Reports studied 1978\-79 model cars to explore how different factors influence fuel economy. The measurement included fuel efficiency in miles\-per\-gallon, curb weight in pounds, engine power in horsepower, and number of cylinders. These variables are included in the file `"cardata.csv"`.
```
Cars = read.csv("http://www.mosaic-web.org/go/datasets/cardata.csv")
head(Cars)
```
```
## mpg pounds horsepower cylinders tons constant
## 1 16.9 3967.60 155 8 2.0 1
## 2 15.5 3689.14 142 8 1.8 1
## 3 19.2 3280.55 125 8 1.6 1
## 4 18.5 3585.40 150 8 1.8 1
## 5 30.0 1961.05 68 4 1.0 1
## 6 27.5 2329.60 95 4 1.2 1
```
1. Use these data to fit the following model of fuel economy (variable `mpg`):
\\\[ \\mbox{\\texttt{mpg}} \= x\_0 \+ x\_1 \\mbox{\\texttt{pounds}}. \\]
What’s the value of the model for an input of 2000 pounds? {14\.9,19\.4,21\.1,25\.0,*28\.8*,33\.9,35\.2}
ANSWER:
```
project(mpg ~ pounds + 1, data = Cars)
```
```
## (Intercept) pounds
## 43.188646127 -0.007200773
```
```
43.1886 - 0.00720*2000
```
```
## [1] 28.7886
```
1. Use the data to fit the following model of fuel economy (variable `mpg`):
\\\[ \\mbox{\\texttt{mpg}} \= y\_0 \+ y\_1 \\mbox{\\texttt{pounds}} \+ y\_2 \\mbox{\\texttt{horsepower}}. \\]
1. What’s the value of the model for an input of 2000 pounds and 150 horsepower? {14\.9,*19\.4*,21\.1,25\.0,28\.8,33\.9,35\.2}
2. What’s the value of the model for an input of 2000 pounds and 50 horsepower? {14\.9,19\.4,21\.1,25\.0,28\.8,*33\.9*,35\.2}
ANSWER:
```
project(mpg ~ pounds + horsepower + 1, data = Cars)
```
```
## (Intercept) pounds horsepower
## 46.932738241 -0.002902265 -0.144930546
```
```
mod_fun <- makeFun(46.933 - 0.00290*lbs - 0.1449*hp ~ lbs + hp)
mod_fun(lbs = 2000, hp = 50)
```
```
## [1] 33.888
```
1. Construct a linear function that uses `pounds`, `horsepower` and `cylinders` to model `mpg`. We don’t have a good way to plot out functions of three input variables, but you can still write down the formula. What is it?
#### 6\.1\.1\.3 Exercise 3: The Intercept
Go back to the problem where you fit polynomials to the `ccf` versus `temp` data. Do it again, but this time tell the software to remove the intercept from the set of vectors. (You do this with the notation `-1` in the `project( )` operator.)
Plot out the polynomials you find over a temperature range from \-10 to 50 degrees, and plot the raw data over them. There’s something very strange about the models you will get. What is it?
1. The computer refuses to carry out this instruction.
2. All the models show a constant output of `ccf`.
3. * All the models have a `ccf` of zero when `temp` is zero.
4. All the models are exactly the same!
### 6\.1\.1 Exercises
#### 6\.1\.1\.1 Exercise 1: Fitting Polynomials
Most college students take a course in algebra that includes a lot about polynomials, and polynomials are very often used in modeling. (Probably, they are used more often than they should be. And algebra teachers might be disappointed to hear that the most important polynomials models are low\-order ones, e.g., \\(f(x,y) \= a \+ bx \+ cy \+ dx y\\) rather than being cubics or quartics, etc.) Fitting a polynomial to data is a matter of linear algebra: constructing the appropriate vectors to represent the various powers. For example, here’s how to fit a quadratic model to the `ccf` versus `temp` variables in the `"utilities.csv"` data file:
```
Utilities = read.csv("http://www.mosaic-web.org/go/datasets/utilities.csv")
project(ccf ~ 1 + temp + I(temp^2), data = Utilities)
```
```
## (Intercept) temp I(temp^2)
## 317.58743630 -6.85301947 0.03609138
```
You may wonder, what is the `I( )` for? It turns out that there are different notations for statistics and mathematics, and that the `^` has a subtly different meaning in R formulas than simple exponentiation. The `I( )` tells the software to take the exponentiation literally in a mathematical sense.
The coefficients tell us that the best\-fitting quadratic model of `ccf` versus `temp` is:
```
ccfQuad <- makeFun(317.587 - 6.853*T + 0.0361*T^2 ~ T)
gf_point(ccf ~ temp, data = Utilities) %>%
slice_plot(ccfQuad(temp) ~ temp)
```
To find the value of this model at a given temperature, just evaluate the function. (And note that `ccfQuad( )` was defined with an input variable `T`.)
```
ccfQuad(T=72)
```
```
## [1] 11.3134
```
#### 6\.1\.1\.1 Exercise 1: Fitting Polynomials
Most college students take a course in algebra that includes a lot about polynomials, and polynomials are very often used in modeling. (Probably, they are used more often than they should be. And algebra teachers might be disappointed to hear that the most important polynomial models are low\-order ones, e.g., \\(f(x,y) \= a \+ bx \+ cy \+ dx y\\) rather than cubics or quartics, etc.) Fitting a polynomial to data is a matter of linear algebra: constructing the appropriate vectors to represent the various powers. For example, here’s how to fit a quadratic model to the `ccf` versus `temp` variables in the `"utilities.csv"` data file:
```
Utilities = read.csv("http://www.mosaic-web.org/go/datasets/utilities.csv")
project(ccf ~ 1 + temp + I(temp^2), data = Utilities)
```
```
## (Intercept) temp I(temp^2)
## 317.58743630 -6.85301947 0.03609138
```
You may wonder, what is the `I( )` for? It turns out that there are different notations for statistics and mathematics, and that the `^` has a subtly different meaning in R formulas than simple exponentiation. The `I( )` tells the software to take the exponentiation literally in a mathematical sense.
The coefficients tell us that the best\-fitting quadratic model of `ccf` versus `temp` is:
```
ccfQuad <- makeFun(317.587 - 6.853*T + 0.0361*T^2 ~ T)
gf_point(ccf ~ temp, data = Utilities) %>%
slice_plot(ccfQuad(temp) ~ temp)
```
To find the value of this model at a given temperature, just evaluate the function. (And note that `ccfQuad( )` was defined with an input variable `T`.)
```
ccfQuad(T=72)
```
```
## [1] 11.3134
```
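As a sanity check, the same value can be reproduced by plugging the coefficients reported by `project()` directly into the quadratic:
```
317.587 - 6.853 * 72 + 0.0361 * 72^2   # about 11.31, matching ccfQuad(72)
```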
1. Fit a 3rd\-order polynomial of `ccf` versus `temp` to the utilities data. What is the value of this model for a temperature of 32 degrees? {87,103,128,*142*,143,168,184}
ANSWER:
```
project(ccf ~ 1 + temp + I(temp^2) + I(temp^3), data = Utilities)
```
```
## (Intercept) temp I(temp^2) I(temp^3)
## 2.550709e+02 -1.427408e+00 -9.643482e-02 9.609511e-04
```
```
ccfCubic <-
makeFun(2.551e2 - 1.427*T -
9.643e-2*T^2 + 9.6095e-4*T^3 ~ T)
gf_point(ccf ~ temp, data = Utilities) %>%
slice_plot(ccfCubic(temp) ~ temp)
```
```
ccfCubic(32)
```
```
## [1] 142.1801
```
1. Fit a 4th\-order polynomial of `ccf` versus `temp` to the utilities data. What is the value of this model for a temperature of 32 degrees? {87,103,128,140,*143*,168,184}
ANSWER:
```
project(ccf ~ 1 + temp + I(temp^2) + I(temp^3) + I(temp^4),
        data = Utilities)
```
```
## (Intercept) temp I(temp^2) I(temp^3) I(temp^4)
## 1.757579e+02 8.225746e+00 -4.815403e-01 7.102673e-03 -3.384490e-05
```
```
ccfQuartic <- makeFun(1.7576e2 + 8.225*T -4.815e-1*T^2 +
                   7.103e-3*T^3 - 3.384e-5*T^4 ~ T)
gf_point(ccf ~ temp, data = Utilities) %>%
  slice_plot(ccfQuartic(temp) ~ temp) %>%
  gf_labs(y = "Natural gas use (ccf)", x = "Temperature (F)")
```
```
ccfQuartic(32)
```
```
## [1] 143.1713
```
1. Make a plot of the **difference** between the 3rd\- and 4th\-order models over a temperature range from 20 to 60 degrees. What’s the biggest difference (in absolute value) between the outputs of the two models?
1. About 1 ccf.
2. *About 4 ccf.*
3. About 8 ccf.
4. About 1 degree F.
5. About 4 degrees F.
6. About 8 degrees F.
ANSWER:
The output of the models is in units of ccf.
```
slice_plot(ccfQuartic(temp) - ccfCubic(temp) ~ temp,
domain(temp = range(20, 60)))
```
The difference between the two models is always within about 4 ccf.
#### 6\.1\.1\.2 Exercise 2: Multiple Regression
In 1980, the magazine Consumer Reports studied 1978\-79 model cars to explore how different factors influence fuel economy. The measurements included fuel efficiency in miles\-per\-gallon, curb weight in pounds, engine power in horsepower, and number of cylinders. These variables are included in the file `"cardata.csv"`.
```
Cars = read.csv("http://www.mosaic-web.org/go/datasets/cardata.csv")
head(Cars)
```
```
## mpg pounds horsepower cylinders tons constant
## 1 16.9 3967.60 155 8 2.0 1
## 2 15.5 3689.14 142 8 1.8 1
## 3 19.2 3280.55 125 8 1.6 1
## 4 18.5 3585.40 150 8 1.8 1
## 5 30.0 1961.05 68 4 1.0 1
## 6 27.5 2329.60 95 4 1.2 1
```
1. Use these data to fit the following model of fuel economy (variable `mpg`):
\\\[ \\mbox{\\texttt{mpg}} \= x\_0 \+ x\_1 \\mbox{\\texttt{pounds}}. \\]
What’s the value of the model for an input of 2000 pounds? {14\.9,19\.4,21\.1,25\.0,*28\.8*,33\.9,35\.2}
ANSWER:
```
project(mpg ~ pounds + 1, data = Cars)
```
```
## (Intercept) pounds
## 43.188646127 -0.007200773
```
```
43.1886 - 0.00720*2000
```
```
## [1] 28.7886
```
1. Use the data to fit the following model of fuel economy (variable `mpg`):
\\\[ \\mbox{\\texttt{mpg}} \= y\_0 \+ y\_1 \\mbox{\\texttt{pounds}} \+ y\_2 \\mbox{\\texttt{horsepower}}. \\]
1. What’s the value of the model for an input of 2000 pounds and 150 horsepower? {14\.9,*19\.4*,21\.1,25\.0,28\.8,33\.9,35\.2}
2. What’s the value of the model for an input of 2000 pounds and 50 horsepower? {14\.9,19\.4,21\.1,25\.0,28\.8,*33\.9*,35\.2}
ANSWER:
```
project(mpg ~ pounds + horsepower + 1, data = Cars)
```
```
## (Intercept) pounds horsepower
## 46.932738241 -0.002902265 -0.144930546
```
```
mod_fun <- makeFun(46.933 - 0.00290*lbs - 0.1449*hp ~ lbs + hp)
mod_fun(lbs = 2000, hp = 50)
```
```
## [1] 33.888
```
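The first part of the question can be answered with the same fitted function, evaluated at 150 horsepower:
```
mod_fun(lbs = 2000, hp = 150)   # about 19.4
```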
1. Construct a linear function that uses `pounds`, `horsepower` and `cylinders` to model `mpg`. We don’t have a good way to plot out functions of three input variables, but you can still write down the formula. What is it?
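A sketch of the fitting step for this part, using the same `project()` idiom as above (the coefficients it prints are what you would use to write down the formula):
```
project(mpg ~ pounds + horsepower + cylinders + 1, data = Cars)
```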
#### 6\.1\.1\.3 Exercise 3: The Intercept
Go back to the problem where you fit polynomials to the `ccf` versus `temp` data. Do it again, but this time tell the software to remove the intercept from the set of vectors. (You do this with the notation `-1` in the `project( )` operator.)
Plot out the polynomials you find over a temperature range from \-10 to 50 degrees, and plot the raw data over them. There’s something very strange about the models you will get. What is it?
1. The computer refuses to carry out this instruction.
2. All the models show a constant output of `ccf`.
3. *All the models have a `ccf` of zero when `temp` is zero.*
4. All the models are exactly the same!
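For reference, a sketch of the no\-intercept fit described in this exercise, here for the cubic; the `-1` drops the constant vector:
```
project(ccf ~ temp + I(temp^2) + I(temp^3) - 1, data = Utilities)
```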
6\.2 `fitModel()`
-----------------
6\.3 Functions with nonlinear parameters
----------------------------------------
The techniques of linear algebra can be used to find the best linear combination of a set of functions. But, often, there are parameters in functions that appear in a nonlinear way. Examples include \\(k\\) in \\(f(t) \= A \\exp( k t ) \+ C\\) and \\(P\\) in \\(A \\sin(\\frac{2\\pi}{P} t) \+ C\\). Finding these nonlinear parameters cannot be done directly using linear algebra, although the methods of linear algebra do help in simplifying the situation.
Fortunately, the idea that the distance between functions can be measured works perfectly well when there are nonlinear parameters involved. So we’ll continue to use the “sum of square residuals” when evaluating how close a function approximation is to a set of data.
6\.4 Exponential functions
--------------------------
To illustrate, consider the `"Income-Housing.csv"` data which
shows an exponential relationship between the fraction of families
with two cars and income:
```
Families <- read.csv("http://www.mosaic-web.org/go/datasets/Income-Housing.csv")
gf_point(TwoVehicles ~ Income, data = Families)
```
The pattern of the data suggests exponential “decay” toward a ceiling at close to 100% of the families having two vehicles. The mathematical form of this exponential function is \\(A \\exp(k Y) \+ C\\). Here \\(A\\) and \\(C\\) are unknown linear parameters, while \\(k\\) is an unknown nonlinear parameter – it will be negative for exponential decay. Linear algebra allows us to find the best linear parameters \\(A\\) and \\(C\\) in order to match the data. But how to find \\(k\\)?
Suppose you make a guess at \\(k\\). The guess doesn’t need to be completely random; you can see from the data themselves that the “half\-life” is something like $25,000\. The parameter \\(k\\) corresponds to the half\-life: it’s \\(\\ln(0\.5\)/\\mbox{half\-life}\\), so here a good guess for \\(k\\) is \\(\\ln(0\.5\)/25000\\), that is
```
kguess <- log(0.5) / 25000
kguess
```
```
## [1] -2.772589e-05
```
Starting with that guess, you can find the best values of the linear
parameters \\(A\\) and \\(C\\) through linear algebra techniques:
```
project( TwoVehicles ~ 1 + exp(Income*kguess), data = Families)
```
```
## (Intercept) exp(Income * kguess)
## 110.4263 -101.5666
```
Make sure that you understand completely the meaning of the above statement. It does NOT mean that `TwoVehicles` is the sum \\(1 \+ \\exp(\\mbox{Income} \\times \\mbox{kguess})\\). Rather, it means that you are searching for the linear combination of the two functions \\(1\\) and \\(\\exp(\\mbox{Income} \\times \\mbox{kguess})\\) that matches `TwoVehicles` as closely as possible. The values returned by `project( )` tell you what this combination will be: how much of \\(1\\) and how much of \\(\\exp(\\mbox{Income} \\times \\mbox{kguess})\\) to add together to approximate `TwoVehicles`.
You can construct the function that is the best linear combination by explicitly adding together the two functions:
```
f <- makeFun( 110.43 - 101.57*exp(Income * k) ~ Income, k = kguess)
gf_point(TwoVehicles ~ Income, data = Families) %>%
slice_plot(f(Income) ~ Income)
```
The graph comes satisfyingly close to the data points. But you can also look at the numerical values of the function for any income:
```
f(Income = 10000)
```
```
## [1] 33.45433
```
```
f(Income = 50000)
```
```
## [1] 85.0375
```
It’s particularly informative to look at the values of the function
for the specific `Income` levels in the data used for fitting,
that is, the data frame `Families`:
```
Results <- Families %>%
dplyr::select(Income, TwoVehicles) %>%
mutate(model_val = f(Income = Income),
resids = TwoVehicles - model_val)
Results
```
```
## Income TwoVehicles model_val resids
## 1 3914 17.3 19.30528 -2.0052822
## 2 10817 34.3 35.17839 -0.8783904
## 3 21097 56.4 53.84097 2.5590313
## 4 34548 75.3 71.45680 3.8432013
## 5 51941 86.6 86.36790 0.2320981
## 6 72079 92.9 96.66273 -3.7627306
```
The ***residuals*** are the difference between these model values and the
actual values of `TwoVehicles` in the data set.
The `resids` column gives the residual for each row. But you can also think of the `resids` column as a ***vector***. Recall that the square\-length of a vector is the sum of its squared components; for the `resids` vector, that is the sum of squared residuals:
```
sum(Results$resids^2)
```
```
## [1] 40.32358
```
This square length of the `resids` vector is an important way to quantify how well the model fits the data.
6\.5 Optimizing the guesses
---------------------------
Keep in mind that the sum of square residuals is a function of \\(k\\). The above value is just for our particular guess \\(k \=\\) `kguess`. Rather than using just one guess for \\(k\\), you can look at many different possibilities. To see them all at the same time, let’s plot out the sum of squared residuals as a *function of* \\(k\\). We’ll do this by building a function that calculates the sum of square residuals for any given value of \\(k\\).
```
sum_square_resids <- Vectorize(function(k) {
sum((Families$TwoVehicles - f(Income=Families$Income, k)) ^ 2)
})
slice_plot(
sum_square_resids(k) ~ k,
domain(k = range(log(0.5)/40000,log(0.5)/20000)))
```
This is a rather complicated computer command, but the graph is straightforward. You can see that the “best” value of \\(k\\), that is, the value of \\(k\\) that makes the sum of square residuals as small as possible, is near \\(k\=\-2\.8\\times10^{\-5}\\) — not very far from the original guess, as it happens. (That’s because the half\-life is very easy to estimate.)
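Rather than reading the minimum off the graph, the same search can be done numerically. Here is a sketch using base R’s `optimize()` over the same interval (it assumes `sum_square_resids()` from above has been defined):
```
best <- optimize(sum_square_resids,
                 lower = log(0.5)/20000, upper = log(0.5)/40000)
best$minimum     # the best-fitting k, near -2.8e-5
best$objective   # the corresponding sum of square residuals
```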
To continue your explorations in nonlinear curve fitting, you are going to use a special purpose function that does much of the work for you while allowing you to try out various values of \\(k\\) by moving a slider.
### 6\.5\.1 Exercises
NEED TO WRITE A SHINY APP FOR THESE EXERCISES and change the text accordingly.
To set it to work on these data, give the following commands, which you can just cut and paste from here:
```
Families = read.csv("http://www.mosaic-web.org/go/datasets/Income-Housing.csv")
Families <- Families %>%
mutate(tens = Income / 10000)
mFitExp(TwoVehicles ~ tens, data = Families)
```
You should see a graph with the data points, and a continuous function drawn in red. There will also be a control box with a few check\-boxes and a slider, like this:
The check\-boxes indicate which functions you want to take a linear combination of. You should check “Constant” and “exp(kx)”, as in the figure. Then you can use the slider to vary \\(k\\) in order to make the function approximate the data as best you can. At the top of the graph is the RMS error, which is the square root of the mean of the squared residuals, so minimizing it is the same as minimizing the sum of square residuals. Making the RMS error as small as possible will give you the best \\(k\\).
You may wonder, what was the point of the line that said
```
mutate(tens = Income / 10000)
```
This is just a concession to the imprecision of the slider. The
slider works over a pretty small range of possible \\(k\\), so this line
constructs a new income variable with units of tens of thousands of
dollars, rather than the original dollars. The instructions will tell
you when you need to do such things.
#### 6\.5\.1\.1 Exercise 1
The data in `"stan-data.csv"` contains measurements made by Prof. Stan Wagon of the temperature of a cooling cup of hot water. The time was measured in seconds, which is not very convenient for the slider, so translate it to minutes. Then find the best value of \\(k\\) in an exponential model.
```
water = read.csv("http://www.mosaic-web.org/go/datasets/stan-data.csv")
water$minutes = water$time/60
mFitExp( temp ~ minutes, data=water)
```
1. What’s the value of \\(k\\) that gives the smallest RMS error? {\-1\.50,*\-1\.25*,\-1\.00,\-0\.75}
2. What are the units of this \\(k\\)? (This is not an R question, but a mathematical one.)
1. seconds
2. minutes
3. per second
4. *per minute*
3. Move the slider to set \\(k\=0\.00\\). You will get an error message from the system about a “singular matrix.” This is because the function \\(e^{0x}\\) is redundant with the constant function. What is it about \\(e^{kx}\\) with \\(k\=0\\) that makes it redundant with the constant function?
#### 6\.5\.1\.2 Exercise 2
The `"hawaii.csv"` data set contains a record of ocean tide levels in Hawaii over a few days. The `time` variable is in hours, which is perfectly sensible, but you are going to rescale it to “quarter days” so that the slider will give better results. Then, you are going to use the `mFitSines( )` program to allow you to explore what happens as you vary the nonlinear parameter \\(P\\) in the linear combination \\(A \\sin(\\frac{2 \\pi}{P} t) \+ B \\cos(\\frac{2 \\pi}{P} t) \+ C\\).
```
Hawaii = read.csv("http://www.mosaic-web.org/go/datasets/hawaii.csv")
Hawaii$quarterdays = Hawaii$time/6
mFitSines(water~quarterdays, data=Hawaii)
```
Check both the \\(\\sin\\) and \\(\\cos\\) checkbox, as well as the “constant.” Then vary the slider for \\(P\\) to find the period that makes the RMS error as small as possible. Make sure the slider labeled \\(n\\) stays at \\(n\=1\\). What is the period \\(P\\) that makes the RMS error as small as possible (in terms of “quarter days”)?
1. 3\.95 quarter days
2. 4\.00 quarter days
3. *4\.05 quarter days*
4. 4\.10 quarter days
You may notice that the “best fitting” sine wave is not particularly close to the data points. One reason for this is that the pattern is more complicated than a simple sine wave. You can get a better approximation by including additional sine functions with different periods. By moving the \\(n\\) slider to \\(n\=2\\), you will include both the sine and cosine functions of period \\(P\\) and of period \\(P/2\\) — the “first harmonic.” Setting \\(n\=2\\) will give a markedly better match to the data.
What period \\(P\\) shows up as best when you have \\(n\=2\\): {3\.92,4\.0,4\.06,*4\.09*,4\.10,4\.15}
Chapter 7 Derivatives and differentiation
=========================================
As with all computations, the operator for taking derivatives, `D()`, takes inputs and produces an output. In fact, compared to many operators, `D()` is quite simple: it takes just one input.
* Input: an expression using the `~` notation. Examples: `x^2~x` or `sin(x^2)~x` or `y*cos(x)~y`
On the left of the `~` is a mathematical expression, written in correct R notation, that will evaluate to a number when numerical values are available for all of the quantities referenced. On the right of the `~` is the variable with respect to which the derivative is to be taken. By no means need this be called `x` or `y`; any valid variable name is allowed.
The **output** produced by `D()` is a function. The function will list as arguments all of the variables contained in the input expression. You can then evaluate the output function for particular numerical values of the arguments in order to find the value of the derivative function.
For example:
```
g <- D(x^2 ~ x)
g(1)
```
```
## [1] 2
```
```
g(3.5)
```
```
## [1] 7
```
7\.1 Formulas and Numerical Difference
--------------------------------------
When the expression is relatively simple and composed of basic mathematical functions, `D()` will often return a function that contains a mathematical formula. For instance, in the above example
```
g
```
```
## function (x)
## 2 * x
## <bytecode: 0x7fb3973d7e20>
```
For other input expressions, `D()` will return a function that is based on a numerical approximation to the derivative — you can’t “see” the derivative, but it is there inside the numerical approximation method:
```
h <- D(sin(abs(x - 3) ) ~ x)
h
```
```
## function (x)
## numerical.first.partial(.function, .wrt, .hstep, match.call())
## <environment: 0x7fb3b1ea8bf0>
```
7\.2 Symbolic Parameters
------------------------
You can include symbolic parameters in an expression being input to `D()`, for example:
```
s2 <- D(A * sin(2 * pi * t / P) + C ~ t)
```
The parameters, in this case `A`, `P`, and `C`, will be turned into arguments to the `s2()` function. Note that `pi` is understood to be the number \\(\\pi\\), not a parameter.
```
s2
```
```
## function (t, A, P, C)
## A * (cos(2 * pi * t/P) * (2 * pi/P))
```
The `s2()` function thus created will work like any other mathematical function, but you will need to specify numerical values for the symbolic parameters when you evaluate the function:
```
s2
```
```
## function (t, A, P, C)
## A * (cos(2 * pi * t/P) * (2 * pi/P))
```
```
s2( t=3, A=2, P=10, C=4 )
```
```
## [1] -0.3883222
```
```
slice_plot(s2(t, A=2, P=10, C=4) ~ t,
domain(t=range(0,20)))
```
7\.3 Partial Derivatives
------------------------
The derivatives computed by `D( )` are *partial derivatives*. That is, they are derivatives where the variable on the right\-hand side of `~` is changed and all other variables are held constant.
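For instance, here is a small illustrative sketch: differentiating \\(x y^2\\) with respect to \\(x\\) treats \\(y\\) as a constant, so the result is \\(y^2\\) whatever the value of \\(x\\):
```
dfdx <- D(x * y^2 ~ x)   # partial with respect to x; y is held constant
dfdx(x = 1, y = 3)       # y^2 at y = 3, namely 9
```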
### 7\.3\.1 Second derivatives
A second derivative is merely the derivative of a derivative. You can use the `D( )` operator twice to find a second derivative, like this.
```
df <- D(sin(x) ~ x)
ddf <- D(df(x) ~ x)
```
To save typing, particularly when there is more than one variable involved in the expression, you can put multiple variables to the right of the `~` sign, as in this second derivative with respect to \\(x\\):
```
another.ddf <- D(sin(x) ~ x & x)
```
This form for second and higher\-order derivatives also delivers more accurate computations.
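As a quick check that the two forms agree (the second derivative of \\(\\sin(x)\\) is \\(\-\\sin(x)\\)):
```
ddf(pi/2)          # about -1
another.ddf(pi/2)  # about -1, computed from the x & x form
```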
### 7\.3\.2 Exercises
#### 7\.3\.2\.1 Exercise 1
Using `D()`, find the derivative of `3 * x ^ 2 - 2*x + 4 ~ x`.
What is the value of the derivative at \\(x\=0\\)? {\-6,\-4,\-3,*\-2*,0,2,3,4,6}
What does a graph of the derivative function look like?
1. A negative sloping line
2. *A positive sloping line*
3. An upward\-facing parabola
4. A downward\-facing parabola
#### 7\.3\.2\.2 Exercise 2
Using `D()`, find the derivative of `5 * exp(0.2 * x) ~ x`.
1. What is the value of the derivative at \\(x\=0\\)? {\-5,\-2,\-1,0,*1*,2,5}.
2. Plot out both the original exponential expression and its derivative. How are they related to each other?
1. They are the same function
2. *Same exponential shape, but different initial values*
3. The derivative has a faster exponential increase
4. The derivative shows an exponential decay
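For part 2 of this exercise, a sketch of the commands (the function names here are illustrative):
```
expf <- makeFun(5 * exp(0.2 * x) ~ x)
d_expf <- D(expf(x) ~ x)
slice_plot(expf(x) ~ x, domain(x = -5:5)) %>%
  slice_plot(d_expf(x) ~ x, color = "red")
d_expf(0)   # 1, the value asked for in part 1
```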
#### 7\.3\.2\.3 Exercise 3
Use `D()` to find the derivative of \\(e^{\-x^2}\\) with respect to \\(x\\) (that is, `exp(-(x^2)) ~ x`). Graph the derivative from \\(x\=\-2\\) to 2 (a sketch of the commands appears after the answer choices). What does the graph look like?
1. A bell\-shaped mountain
2. Exponential growth
3. *A positive wave followed by a negative wave*
4. A negative wave followed by a positive wave
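The sketch referred to above (the name `dbell` is illustrative):
```
dbell <- D(exp(-(x^2)) ~ x)
slice_plot(dbell(x) ~ x, domain(x = -2:2))
```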
#### 7\.3\.2\.4 Exercise 4
What will be the value of this derivative?
```
D(fred^2 ~ ginger)
```
1. *0 everywhere*
2. 1 everywhere
3. A positive sloping line
4. A negative sloping line
#### 7\.3\.2\.5 Exercise 5
Use `D()` to find the 3rd derivative of `cos(2 * t)`. If you do this by using the `~t&t&t` notation, you will be able to read off a formula for the 3rd derivative.
1. What is it?
1. \\(\\sin(t)\\)
2. \\(\\sin(2 t)\\)
3. \\(4 \\sin(2 t)\\)
4. *\\(8 \\sin(2 t)\\)*
5. \\(16 \\sin(2 t)\\)
2. What’s the 4th derivative?
1. \\(\\cos(t)\\)
2. \\(\\cos(2 t)\\)
3. \\(4 \\cos(2 t)\\)
4. \\(8 \\cos(2 t)\\)
5. *\\(16 \\cos(2 t)\\)*
#### 7\.3\.2\.6 Exercise 6
Compute and graph the 4th derivative of `cos(2 * t ^ 2) ~ t` from \\(t\=0\\) to 5\.
1. What does the graph look like?
1. A constant
2. A cosine whose period decreases as \\(t\\) gets bigger
3. *A cosine whose amplitude increases and whose period decreases as \\(t\\) gets bigger*
4. A cosine whose amplitude decreases and whose period increases as \\(t\\) gets bigger
2. For `cos(2 * t ^ 2) ~ t` the fourth derivative is a complicated\-looking expression made up of simpler expressions. What functions appear in the complicated expression?
1. sin and cos functions
2. cos, squaring, multiplication and addition
3. *cos, sin, squaring, multiplication and addition*
4. log, cos, sin, squaring, multiplication and addition
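A sketch of the computation and plot asked for in this exercise (the name `d4` is illustrative):
```
d4 <- D(cos(2 * t^2) ~ t & t & t & t)
slice_plot(d4(t) ~ t, domain(t = 0:5))
```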
#### 7\.3\.2\.7 Exercise 7
Consider the expression `x * sin(y)` involving variables \\(x\\) and \\(y\\). Use `D( )` to compute several derivative functions: the partial with respect to \\(x\\), the partial with respect to \\(y\\), the second partial derivative with respect to \\(x\\), the second partial derivative with respect to \\(y\\), and these two mixed partials:
```
pxy = D(x * sin(y) ~ x & y)
pyx = D(x * sin(y) ~ y & x)
```
Pick several \\((x,y)\\) pairs and evaluate each of the derivative
functions at them. Use the results to answer the following:
1. The partials with respect to \\(x\\) and to \\(y\\) are identical. T or *F*
2. The second partials with respect to \\(x\\) and to \\(y\\) are identical. T or *F*
3. The two mixed partials are identical. That is, it doesn’t matter whether you differentiate first with respect to \\(x\\) and then \\(y\\), or vice versa. *T* or F
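As a spot check for this exercise, evaluating the two mixed partials at one \\((x,y)\\) pair; both should equal \\(\\cos(y)\\), whatever the value of \\(x\\):
```
pxy(x = 2, y = 1)   # about 0.54, i.e. cos(1)
pyx(x = 2, y = 1)   # the same value
```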
Chapter 8 Integrals and integration
===================================
You’ve already seen a fundamental calculus operator, differentiation, which is implemented by the R/`mosaicCalc` function `D()`. The differentiation operator takes as input a function and a “with respect to” variable. The output is another function which has the “with respect to” variable as an argument, and potentially other arguments as well.
```
f <- makeFun( A * x ^ 2 ~ x, A = 0.5)
f(1)
```
```
## [1] 0.5
```
```
f(2)
```
```
## [1] 2
```
```
f(3)
```
```
## [1] 4.5
```
```
df <- D(f(x) ~ x)
df(1)
```
```
## [1] 1
```
```
df(2)
```
```
## [1] 2
```
```
df(3)
```
```
## [1] 3
```
```
slice_plot(f(x) ~ x, domain(x = -1:1)) %>%
gf_labs(title = "Original function f(x)")
slice_plot(df(x) ~ x, domain(x =-1:1), color = "red") %>%
gf_labs(title = "New function df(x), the derivative of f(x)")
```
Figure 8\.1: A function and its derivative.
Figure [8\.1](integrals-and-integration.html#fig:two-functions1) shows a graph of \\(f(x)\\) – a smiley curve – and its derivative \\(df(x)\\).
8\.1 The anti\-derivative
-------------------------
Now, imagine that we start with \\(df(x)\\) and we want to find a function \\(DF(x)\\) whose derivative is \\(df(x)\\). In other words, imagine that applying the inverse of the \\(D()\\) operator to the function \\(df(x)\\) produces \\(f()\\) (or something much like it).
This inverse operator is implemented in R/`mosaicCalc` as the `antiD()` function. As the prefix `anti` suggests, `antiD()` “undoes” what \\(D()\\) does. Like this:
```
DF <- antiD(df(x) ~ x)
DF(1)
```
```
## [1] 0.5
```
```
DF(2)
```
```
## [1] 2
```
```
DF(3)
```
```
## [1] 4.5
```
```
slice_plot(df(x) ~ x, domain(x=-1:1), color = "red") %>%
gf_labs(title = "Original function df(x)")
slice_plot(DF(x) ~ x, domain(x=-1:1)) %>%
gf_labs(title = "New function DF(x), the anti-derivative of df(x)")
```
Figure 8\.2: A function and its anti\-derivative.
Notice that the function \\(DF\\) was created by anti\-differentiating not \\(f\\) but \\(df\\) with respect to \\(x\\). The result is a function that’s “just like” \\(f\\). (Why the quotes on “just like”? You’ll see.) You can see that the values of \\(DF\\) are the same as the values of the original \\(f\\).
You can also go the other way: anti\-differentiating a function and then taking the derivative to get back to the original function.
```
h <- antiD( f(x) ~ x )
dh <- D(h(x) ~ x )
dh(1)
```
```
## [1] 0.5
```
```
dh(2)
```
```
## [1] 2
```
```
dh(3)
```
```
## [1] 4.5
```
As you can see, `antiD( )` undoes `D( )`, and `D( )` undoes `antiD( )`. It’s that easy. But there is one catch: for any function \\(f(x)\\) there are many anti\-derivatives of \\(f(x)\\).
8\.2 One variable becomes two arguments
---------------------------------------
It’s rarely the case that you will want to anti\-differentiate a function that you have just differentiated. One undoes the other, so there is little point except to illustrate in a textbook how differentiation and anti\-differentiation are related to one another. But it often happens that you are working with a function that describes the derivative of some unknown function, and you wish to find the unknown function.
This is often called “integrating” a function. “Integration” is a shorter and nicer term than “anti\-differentiation,” and is the more commonly used term. The function that’s produced by the process is generally called an “integral.” The terms “indefinite integral” and “definite integral” are often used to distinguish between the function produced by anti\-differentiation and the *value* of that function when evaluated at specific inputs. This will be confusing at first, but you’ll soon get a feeling for what’s going on.
As you know, a derivative tells you a **local** property of a function: how the function changes when one of the inputs is changed by a small amount. The derivative is a sort of slope. If you’ve ever stood on a hill, you know that you can tell the local slope without being able to see the whole hill; just feel what’s under your feet.
An anti\-derivative undoes a derivative, but what does it mean to “undo” a local property? The answer is that an anti\-derivative (or, in other words, an integral) tells you about some **global** or **distributed** properties of a function: not just the value at a point, but the value accumulated over a whole range of points. This global or distributed property of the anti\-derivative is what makes anti\-derivatives a bit more complicated than derivatives, but not much more so.
At the core of the problem is that there is more than one way to “undo” a derivative. Consider the following functions, each of which is different:
```
f1 <- makeFun(sin(x ^ 2) ~ x)
f2 <- makeFun(sin(x ^ 2) + 3 ~ x)
f3 <- makeFun(sin(x ^ 2) - 100 ~ x)
f1(1)
```
```
## [1] 0.841471
```
```
f2(1)
```
```
## [1] 3.841471
```
```
f3(1)
```
```
## [1] -99.15853
```
Despite the fact that the functions \\(f\_1(x)\\), \\(f\_2(x)\\), and \\(f\_3(x)\\), are different, they all have the same derivative.
```
df1 = D(f1(x) ~ x)
df2 = D(f2(x) ~ x)
df3 = D(f3(x) ~ x)
df1(1)
```
```
## [1] 1.080605
```
```
df2(1)
```
```
## [1] 1.080605
```
```
df3(1)
```
```
## [1] 1.080605
```
This raises a problem. When you “undo” the derivative of any of `df1`, `df2`, or `df3`, what should the answer be? Should you get \\(f\_1\\) or \\(f\_2\\) or \\(f\_3\\) or some other function? It appears that the antiderivative is, to some extent, indefinite.
The answer to this question is by no means a philosophical mystery. There’s a very definite answer. Or, rather, there are two answers that work out to be different faces of the same thing.
To start, it helps to review the traditional mathematical notation, so that it can be compared side\-by\-side with the computer notation. Given a function \\(f(x)\\), the derivative with respect to \\(x\\) is written \\(df/dx\\) and the anti\-derivative is written \\(\\int f(x) dx\\).
All of the functions that have the same derivative are similar. In fact, they are identical except for an additive constant. So the problem of indefiniteness of the antiderivative amounts just to an additive constant — the anti\-derivative of the derivative of a function will be the function give or take an additive constant:
\\\[ \\int \\frac{df}{dx} dx \= f(x) \+ C .\\]
So, as long as you are not concerned about additive constants, the anti\-derivative of the derivative of a function gives you back the original function.
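You can see the additive constant explicitly in the output of `antiD()`, which carries it along as an extra argument `C` (taken up in the next section). A small sketch:
```
G <- antiD(x^2 ~ x)   # the result is a function of x and C
G(2, C = 0)           # 8/3
G(2, C = 5)           # 8/3 + 5: a different anti-derivative of the same x^2
```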
8\.3 The integral
-----------------
The derivative tells you how a function changes locally. The anti\-derivative accumulates those local values to give you a global value; it considers not just the local properties of the function at a single particular input value but the values over a **range** of inputs.
Remember that the derivative of \\(f\\) is itself a function, and that function has the same arguments as \\(f\\). So, since `f(x)` was defined to have an argument named `x`, the function created by `D(f(x) ~ x)` also has an argument named `x` (and whatever other parameters are involved):
```
f
```
```
## function (x, A = 0.5)
## A * x^2
## <bytecode: 0x7fe6038205e0>
```
```
df
```
```
## function (x, A = 0.5)
## A * (2 * (x))
## <bytecode: 0x7fe603934720>
```
The anti\-derivative operation is a little bit different in this respect. When you use `antiD()`, the name of the function’s variable is replaced by *two* arguments: the actual name (in this example, \\(x\\)) and the constant \\(C\\):
```
antiD(f(x) ~ x)
antiD(df(x) ~ x)
```
The value of `C` sets, implicitly, the lower end of the range over which the accumulation is going to occur.
This is a point that is somewhat obscured by traditional mathematical notation, which allows you to write down statements that are not completely explicit. For example, in traditional notation, it’s quite accepted to write an integration statement like this:
\\\[ \\int x^2 dx \= \\frac{1}{3} x^3 . \\]
This looks like a function of \\(x\\). But that’s not the whole truth. In fact, the complete statement of the integral involves another argument: `C`:
\\\[\\int x^2 dx \= \\frac{1}{3} x^3 \+ C, \\]
So, really, the value of \\(\\int x^2 dx\\) is a function both of \\(x\\) and \\(C\\). In traditional notation, the \\(C\\) argument is often left out and the reader is expected to remember that \\(\\int x^2 dx\\) is an “indefinite integral.”
Another traditional style for writing an integral is
\\\[ \\int ^{\\mbox{to}}\_{\\mbox{from}} x^2 dx \= \\left. \\frac{1}{3} x^3
\\right\|^{to}\_{from} ,\\]
where the \\(\\left. \\right\|^{\\mbox{to}}\_{\\mbox{from}}\\) notation means that you are to substitute the values of `from` and `to` in for \\(x\\) and take the difference. For instance:
\\\[ \\int ^{2}\_{\-1} x^2 dx \= \\left. \\frac{1}{3} x^3 \\right\|^{2}\_{\-1} \=
\\frac{1}{3} 2^3 \- \\frac{1}{3} (\-1\)^3 \= \\frac{9}{3} \= 3\\]
Notice that it doesn’t matter whether the function had been defined in terms of \\(x\\) or \\(y\\) or anything else. In the end, the definite integral is a function of `from` and `to`.
Notice how the calculation of the definite integral involves *two* applications of the anti\-derivative. The definite integral is the *difference* between the anti\-derivative evaluated at `to` and the anti\-derivative evaluated at `from`. But how do we know what value of \\(C\\) to provide when calculating a definite integral? The answer is simple: it doesn’t matter what \\(C\\) is so long as it is the same in both the `to` and `from` evaluations of the anti\-derivative. The \\(C\\) in the `from` calculation cancels out the \\(C\\) in the `to` calculation. Since the \\(C\\)’s cancel out, any value of \\(C\\) will do. In the software, we choose to use a default value of \\(C \= 0\\).
```
fun = antiD( x^2 ~ x )
fun
```
```
## function (x, C = 0)
## 1/3 * x^3 + C
```
When you evaluate the function at specific numerical values for those arguments, you end up with the “definite integral,” a number:
```
# The definite integral of x^2 over the interval from -1 to 2
fun(x = 2) - fun(x = -1)
```
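And, as claimed, the particular value of \\(C\\) makes no difference, because it cancels in the subtraction:
```
fun(x = 2, C = 10) - fun(x = -1, C = 10)   # still 3
```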
For now, these are the essential things to remember:
1. The `antiD( )` function will compute an anti\-derivative.
2. Like the derivative, the anti\-derivative is always taken with respect to a variable, for instance `antiD( x^2 ~ x )`. That variable, here `x`, is called (sensibly enough) the “variable of integration.” You can also say, “the integral with respect to \\(x\\).”
3. The definite integral is a function of the variable of integration … sort of. To be more precise, the variable of integration appears as an argument in two guises since the definite integral involves two evaluations: one at \\(x \=\\) `to` and one at \\(x \=\\) `from`. The bounds defined by `from` and `to` are often called the “region of integration.”
The many vocabulary terms used reflect the different ways you might specify or not specify particular numerical values for `from` and `to`: “integral,” “anti\-derivative,” “indefinite integral,” and “definite integral.” Admittedly, this can be confusing, but that’s a consequence of something important: the integral is about the “global” or “distributed” properties of a function, the “whole.” In contrast, derivatives are about the “local” properties: the “part.” The whole is generally more complicated than the part.
You’ll do well if you can remember this one fact about integrals: there are always two arguments that reflect the region of integration: `from` and `to`.
### 8\.3\.1 Exercises
#### 8\.3\.1\.1 Exercise 1
Find the numerical value of each of the following definite integrals.
1. \\(\\int^{5}\_{2} x^{1\.5} dx\\)
ANSWER:
```
f = integral(x^1.5 ~ x)
f(from = 2, to = 5)
```
1. \\(\\int^{10}\_{0} sin( x^2 ) dx\\)
ANSWER:
```
f = integral(sin(x^2) ~ x)
f(from = 0, to = 10)
```
1. \\(\\int^{4}\_{1} e^{2x} dx\\)
{0\.58,6\.32,20\.10,27\.29,53\.60,107\.9,*1486\.8*}
ANSWER:
```
f = integral(exp(2*x) ~ x)
f(from=1,to=4)
```
1. \\(\\int^{2}\_{\-2} e^{2x} dx\\)
{0\.58,6\.32,20\.10,*27\.29*,53\.60,107\.9,1486\.8}
ANSWER:
```
f = integral(exp(2*x) ~ x)
f(from=-2,to=2)
```
1. \\(\\int^{2}\_{\-2} e^{2 \| x \|} dx\\)
{0\.58,6\.32,20\.10,27\.29,*53\.60*,107\.9,1486\.8}
ANSWER:
```
f = integral(exp(abs(2*x)) ~ x)
f(from=-2,to=2)
```
#### 8\.3\.1\.2 Exercise 2
There’s a very simple relationship between \\(\\int^b\_a f(x) dx\\) and \\(\\int^a\_b f(x) dx\\) — integrating the same function \\(f\\), but reversing the values of `from` and `to`.
Create some functions, integrate them, and experiment with them to find the relationship.
1. They are the same value.
2. One is twice the value of the other.
3. *One is negative the other.*
4. One is the square of the other.
#### 8\.3\.1\.3 Exercise 3
The function being integrated can have additional variables or parameters beyond the variable of integration. To evaluate the definite integral, you need to specify values for those additional variables.
For example, a very important function in statistics and physics is the Gaussian, which has a bell\-shaped graph:
```
gaussian <-
makeFun((1/sqrt(2*pi*sigma^2)) *
exp( -(x-mean)^2/(2*sigma^2)) ~ x,
mean=2, sigma=2.5)
slice_plot(gaussian(x) ~ x, domain(x = -5:10)) %>%
slice_plot(gaussian(x, mean=0, sigma=1) ~ x, color="red")
```
As you can see, it’s a function of \\(x\\), but also of the parameters `mean` and `sigma`.
When you integrate this, you need to tell `antiD()` or `integral()` what the parameters are going to be called:
```
erf <- antiD(gaussian(x, mean=m, sigma=s) ~ x)
erf
```
Evaluate each of the following definite integrals:
1. \\(\\int^1\_0 \\mbox{erf}(x,m\=0,s\=1\) dx\\)
{0\.13,*0\.34*,0\.48,0\.50,0\.75,1\.00}
ANSWER:
```
erf(x = 1, m=0, s=1) - erf(x = 0, m=0, s=1)
```
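As an independent check with base R (a sketch, not part of the exercise), the same area under the standard normal density can be computed from the normal CDF:

```
# P(0 < X < 1) for X ~ Normal(0, 1), roughly 0.34.
pnorm(1) - pnorm(0)
```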
The name `erf` is arbitrary. In mathematics, `erf` is the name of something called the ERror Function, just as `sin` is the name of the sine function. The [formal definition of `erf`](https://en.wikipedia.org/wiki/Error_function) is a bit different than the `erf` presented here, but the name `erf` is so much fun that I wanted to include it in the book. The real `erf` is
\\\[ \\mbox{erf}\_{formal}(x) \= 2\\ \\mbox{erf}\_{here}(x) \- 1\.\\]
1. \\(\\int^2\_0 f(x,m\=0,s\=1\) dx\\)
{0\.13,0\.34,*0\.48*,0\.50,0\.75,1\.00}
ANSWER:
```
erf(x = 2, m=0, s=1) - erf(x = 0, m=0, s=1)
```
1. \\(\\int^2\_0 f(x,m\=0,s\=2\) dx\\)
{\-0\.48,*\-0\.34*,\-0\.13, 0\.13, 0\.34, 0\.48}
ANSWER:
```
erf(x = 0, m=0, s=2) - erf(x = 2, m=0, s=2)
```
1. \\(\\int^3\_{\-\\infty} f(x,m\=3,s\=10\) dx\\). (Hint: The mathematical \\(\-\\infty\\) is represented as `-Inf` on the computer.)
{0\.13,0\.34,0\.48,*0\.50*,0\.75,1\.00}
ANSWER:
```
erf(x = 3, m=3, s=10) - erf(x = -Inf, m=3, s=10)
```
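Base R gives a quick sanity check here as well (a sketch): half the probability mass of a normal distribution lies below its mean.

```
# P(X <= 3) for X ~ Normal(mean = 3, sd = 10) is exactly 0.5.
pnorm(3, mean = 3, sd = 10)
```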
1. \\(\\int^{\\infty}\_{\-\\infty} f(x,m\=3,s\=10\) dx\\)
{0\.13,0\.34,0\.48,0\.50,0\.75,*1\.00*}
ANSWER:
```
erf(x = Inf, m=3, s=10) - erf(x = -Inf, m=3, s=10)
```
| Field Specific |
dtkaplan.github.io | https://dtkaplan.github.io/RforCalculus/dynamics.html |
Chapter 9 Dynamics
==================
A basic strategy in calculus is to divide a challenging problem into easier bits, and then put together the bits to find the overall solution. Thus, areas are reduced to integrating heights. Volumes come from integrating areas. Differential equations provide an important and compelling setting for illustrating the calculus strategy, while also providing insight into modeling approaches and a better understanding of real\-world phenomena. A differential equation relates the instantaneous “state” of a system to the instantaneous change of state.
9\.1 Solving differential equations
-----------------------------------
“Solving” a differential equation amounts to finding the value of the state as a function of the independent variables. In an “ordinary differential equation,” there is only one independent variable, typically called time. In a “partial differential equation,” there are two or more independent variables, for example, time and space.
The `integrateODE()` function solves an ordinary differential equation starting at a given initial condition of the state.
To illustrate, here is the differential equation corresponding to logistic growth:
\\\[\\frac{dx}{dt} \= r x (1\-x/K).\\]
There is a state \\(x\\). The equation describes how the change in state over time, \\(dx/dt\\), is a function of the state. The typical application of the logistic equation is to limited population growth; for \\(x \< K\\) the population grows while for \\(x \> K\\) the population decays. The state \\(x \= K\\) is a “stable equilibrium.” It’s an equilibrium because, when \\(x \= K\\), the change of state is nil: \\(dx/dt \= 0\\). It’s stable, because a slight change in state will incur growth or decay that brings the system back to the equilibrium. The state \\(x \= 0\\) is an unstable equilibrium.
The algebraic solution to this equation is a staple of calculus books.[4](dynamics.html#fn4) It is
\\\[x(t) \= \\frac{K x(0\)}{x(0\) \+ (K \- x(0\)) e^{\-rt}}\\]
The solution gives the state as a function of time, \\(x(t)\\), whereas the differential equation gives the change in state as a function of the state itself. The initial value of the state (the “initial condition”) is \\(x(0\)\\), that is, x at time zero.
The logistic equation is much beloved because of this algebraic solution. Equations that are very closely related in their phenomenology do not have analytic solutions.
The `integrateODE()` function takes the differential equation as an input, together with the initial value of the state. Numerical values for all parameters must be specified, as they would be in any case to draw a graph of the solution. In addition, you must specify the range of time for which you want the function \\(x(t)\\). For example, here’s the solution for time running from 0 to 20\.
```
soln <- integrateODE(dx ~ r * x * (1 - x / K),
x = 1, K = 10, r = 0.5,
tdur = list(from=0, to=20))
```
The object that is created by `integrateODE()` is a function of time. Or, rather, it is a set of solutions, one for each of the state variables. In the logistic equation, there is only one state variable \\(x\\). Finding the value of \\(x\\) at time \\(t\\) means evaluating the solution function at that \\(t\\). Here are the values at \\(t \= 0,1,...,5\\).
```
soln$x(0:5)
```
```
## [1] 1.000000 1.548281 2.319693 3.324279 4.508531 5.751209
```
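As a quick check (a sketch, not from the book), these numbers can be compared against the algebraic solution written above, using the same parameters \\(r \= 0\.5\\), \\(K \= 10\\), and \\(x(0\) \= 1\\):

```
# Algebraic solution of the logistic equation with r = 0.5, K = 10, x(0) = 1.
x_alg <- function(t, x0 = 1, K = 10, r = 0.5) {
  K * x0 / (x0 + (K - x0) * exp(-r * t))
}
x_alg(0:5)  # compare with soln$x(0:5) above
```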
Often, you will plot out the solution against time:
```
slice_plot(soln$x(t) ~ t, domain(t=0:20))
```
9\.2 Systems of differential equations
--------------------------------------
Differential equation systems with more than one state variable can be handled as well. To illustrate, here is the SIR model of the spread of epidemics, in which the state is the number of susceptibles \\(S\\) and the number of infectives \\(I\\) in the population. Susceptibles become infective by meeting an infective; infectives recover and leave the system. There is one equation for the change in \\(S\\) and a corresponding equation for the change in \\(I\\). The initial value is \\(I \= 1\\), corresponding to the start of the epidemic.
```
epi <- integrateODE(dS ~ -a * S * I,
dI ~ a * S * I - b * I,
a = 0.0026, b = 0.5, S=762, I = 1,
tdur = 20)
```
This system of two differential equations is solved to produce two functions, \\(S(t)\\) and \\(I(t)\\).
```
slice_plot(epi$S(t) ~ t, domain(t=0:20)) %>%
slice_plot(epi$I(t) ~ t, color = "red")
```
In the solution, you can see the epidemic grow to a peak near \\(t \= 5\\). At this point, the number of susceptibles has fallen so sharply that the number of infectives starts to fall as well. In the end, almost every susceptible has been infected.
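To put a number on “almost every susceptible,” you can evaluate the solution function at the end of the period (a sketch using the `epi` object created above):

```
# Susceptibles remaining at t = 20, out of 762 initially.
epi$S(20)
epi$S(20) / 762  # fraction of the population never infected
```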
### 9\.2\.1 Example: Diving from the high board
Consider a diver as she jumps off the 5\-meter high board and plunges into the water. In particular, suppose you want to understand the forces at work. To do so, you construct a dynamical model with state variables \\(v\\) (velocity) and \\(x\\) (position). As you may remember from physics, a falling object accelerates downward at 9\.8 meters per sec\\(^2\\). We’ll specify that the initial jump off the board is upward at a velocity of 1 meter per sec.
```
dive <- integrateODE(dv ~ -9.8, dx ~ v,
v = 1, x = 5, tdur = 1.2)
slice_plot(dive$x(t) ~ t, domain(t = range(0, 1.2))) %>%
gf_labs(y = "Height (m)", x = "time (s)")
```
The diver hits the water at about \\(t \= 1\.1\\) s. Of course, once in the water, the diver is no longer accelerating downward, so the model isn’t valid for \\(x \< 0\\).
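The exact moment of impact can be pinned down numerically (a sketch, assuming the `dive` object created above) by solving \\(x(t) \= 0\\) with base R’s `uniroot()`:

```
# Find the time at which the diver's height reaches the water surface (x = 0).
uniroot(function(t) dive$x(t), interval = c(0.5, 1.2))$root
```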
What’s nice about the differential equation format is that it’s easy to add features like the buoyancy and drag of the water. We’ll do that here by changing the acceleration (the \\(dv\\) term) so that when \\(x \< 0\\) the acceleration is slightly positive (buoyant), with a drag term proportional to \\(v^2\\) in the direction opposed to the motion.
```
diveFloat <- integrateODE(
dv ~ ifelse(x < 0, 1 - 2 * sign(v) * v^2, -9.8),
dx ~ v,
v = 1, x = 5, tdur = 10)
slice_plot(diveFloat$x(t) ~ t, domain(t = 0:10)) %>%
gf_labs(ylab="Height (m)", xlab="time (s)")
```
According to the model, the diver resurfaces at about 5 seconds, and then bobs in the water.
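If you want the maximum depth reached (useful for Exercise 5 below), one option is to minimize the height over the underwater phase with base R’s `optimize()` (a sketch, assuming the `diveFloat` object created above):

```
# Deepest point of the dive: minimize x(t) between t = 1 and t = 5 seconds.
deepest <- optimize(function(t) diveFloat$x(t), interval = c(1, 5))
deepest$minimum    # time of greatest depth (s)
deepest$objective  # greatest depth (m); negative means below the surface
```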
### 9\.2\.2 Exercises
#### 9\.2\.2\.1 Exercise 1
An equation for exponential
growth. What’s the value at time \\(t \= 10\\)?
#### 9\.2\.2\.2 Exercise 2
An equation for logistic growth.
What’s the value at time \\(t \= 10\\)?
#### 9\.2\.2\.3 Exercise 3
A phase plane problem.
#### 9\.2\.2\.4 Exercise 4
Linear phase plane. Ask about different parameters. Is the system stable or unstable; oscillatory or not?
#### 9\.2\.2\.5 Exercise 5
Or maybe move to an activity. The diving model. Vary the parameters until the maximum depth is some specified value.
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/coding_club.html | Data Visualization |
|
psyteachr.github.io | https://psyteachr.github.io/coding_club.html | Field Specific |
|
psyteachr.github.io | https://psyteachr.github.io/coding_club.html | Field Specific |
|
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/index.html |
Overview
========
This course provides an overview of skills needed for reproducible research and open science using the statistical programming language R. Students will learn about data visualisation, data tidying and wrangling, archiving, iteration and functions, probability and data simulations, general linear models, and reproducible workflows. Learning is reinforced through weekly assignments that involve working with different types of data.
0\.1 Course Aims
----------------
This course aims to teach students the basic principles of reproducible research and to provide practical training in data processing and analysis in the statistical programming language R.
0\.2 Intended Learning Outcomes
-------------------------------
By the end of this course students will be able to:
* Write scripts in R to organise and transform data sets using best accepted practices
* Explain basics of probability and its role in statistical inference
* Critically analyse data and report descriptive and inferential statistics in a reproducible manner
0\.3 Course Resources
---------------------
* [Data Skills Videos](https://www.youtube.com/playlist?list=PLA2iRWVwbpTIweEBHD2dOKjZHK1atRmXt)
Each chapter has several short video lectures covering the main learning outcomes, collected in the playlist linked above. The videos are captioned, and watching with the captioning on is a useful way to learn the jargon of computational reproducibility. If you cannot access YouTube, the videos are available on the course Teams and Moodle sites or by request from the instructor.
* [dataskills](https://github.com/psyteachr/msc-data-skills)
This is a custom R package for this course. You can install it with the code below. It will download all of the packages that are used in the book, along with an offline copy of this book, the shiny apps used in the book, and the exercises.
```
devtools::install_github("psyteachr/msc-data-skills")
```
* [glossary](https://github.com/psyteachr/glossary)
Coding and statistics both have a lot of specialist terms. Throughout this book, jargon will be linked to the glossary.
0\.4 Course Outline
-------------------
The overview below lists the beginner learning outcomes only. Some lessons have additional learning outcomes for intermediate or advanced students.
1. [Getting Started](intro.html#intro)
1. Understand the components of the [RStudio IDE](intro.html#rstudio_ide)
2. Type commands into the [console](intro.html#console)
3. Understand [function syntax](intro.html#function_syx)
4. Install a [package](intro.html#install-package)
5. Organise a [project](intro.html#projects)
6. Create and compile an [Rmarkdown document](intro.html#rmarkdown)
2. [Working with Data](data.html#data)
1. Load [built\-in datasets](data.html#builtin)
2. [Import data](data.html#import_data) from CSV and Excel files
3. Create a [data table](data.html#tables)
4. Understand the use of the [basic data types](data.html#data_types)
5. Understand and use the [basic container types](data.html#containers) (list, vector)
6. Use [vectorized operations](data.html#vectorized_ops)
7. Be able to [troubleshoot](#Troubleshooting) common data import problems
3. [Data Visualisation](ggplot.html#ggplot)
1. Understand what types of graphs are best for [different types of data](ggplot.html#vartypes)
2. Create common types of graphs with ggplot2
3. Set custom [labels](ggplot.html#custom-labels), [colours](ggplot.html#custom-colours), and [themes](ggplot.html#themes)
4. [Combine plots](combo_plots) on the same plot, as facets, or as a grid using cowplot
5. [Save plots](ggplot.html#ggsave) as an image file
4. [Tidy Data](tidyr.html#tidyr)
1. Understand the concept of [tidy data](tidyr.html#tidy-data)
2. Be able to convert between long and wide formats using [pivot functions](tidyr.html#pivot)
3. Be able to use the 4 basic [`tidyr` verbs](tidyr.html#tidy-verbs)
4. Be able to chain functions using [pipes](tidyr.html#pipes)
5. [Data Wrangling](dplyr.html#dplyr)
1. Be able to use the 6 main dplyr one\-table verbs: [`select()`](dplyr.html#select), [`filter()`](dplyr.html#filter), [`arrange()`](dplyr.html#arrange), [`mutate()`](dplyr.html#mutate), [`summarise()`](dplyr.html#summarise), [`group_by()`](dplyr.html#group_by)
2. Be able to [wrangle data by chaining tidyr and dplyr functions](dplyr.html#all-together)
3. Be able to use these additional one\-table verbs: [`rename()`](dplyr.html#rename), [`distinct()`](dplyr.html#distinct), [`count()`](dplyr.html#count), [`slice()`](dplyr.html#slice), [`pull()`](dplyr.html#pull)
6. [Data Relations](joins.html#joins)
1. Be able to use the 4 mutating join verbs: [`left_join()`](joins.html#left_join), [`right_join()`](joins.html#right_join), [`inner_join()`](joins.html#inner_join), [`full_join()`](joins.html#full_join)
2. Be able to use the 2 filtering join verbs: [`semi_join()`](joins.html#semi_join), [`anti_join()`](joins.html#anti_join)
3. Be able to use the 2 binding join verbs: [`bind_rows()`](joins.html#bind_rows), [`bind_cols()`](joins.html#bind_cols)
4. Be able to use the 3 set operations: [`intersect()`](joins.html#intersect), [`union()`](joins.html#union), [`setdiff()`](joins.html#setdiff)
7. [Iteration \& Functions](func.html#func)
1. Work with [iteration functions](func.html#iteration-functions): `rep()`, `seq()`, and `replicate()`
2. Use [`map()` and `apply()` functions](func.html#map-apply)
3. Write your own [custom functions](func.html#custom-functions) with `function()`
4. Set [default values](func.html#defaults) for the arguments in your functions
8. [Probability \& Simulation](sim.html#sim)
1. Generate and plot data randomly sampled from common distributions: uniform, binomial, normal, poisson
2. Generate related variables from a [multivariate](sim.html#mvdist) distribution
3. Define the following statistical terms: [p\-value](sim.html#p-value), [alpha](sim.html#alpha), [power](sim.html#power), smallest effect size of interest ([SESOI](#sesoi)), [false positive](sim.html#false-pos) (type I error), [false negative](#false-neg) (type II error), confidence interval ([CI](#conf-inf))
4. Test sampled distributions against a null hypothesis using: [exact binomial test](sim.html#exact-binom), [t\-test](sim.html#t-test) (1\-sample, independent samples, paired samples), [correlation](sim.html#correlation) (pearson, kendall and spearman)
5. [Calculate power](sim.html#calc-power-binom) using iteration and a sampling function
9. [Introduction to GLM](glm.html#glm)
1. Define the [components](glm.html#glm-components) of the GLM
2. [Simulate data](glm.html#sim-glm) using GLM equations
3. Identify the model parameters that correspond to the data\-generation parameters
4. Understand and plot [residuals](glm.html#residuals)
5. [Predict new values](glm.html#predict) using the model
6. Explain the differences among [coding schemes](glm.html#coding-schemes)
10. [Reproducible Workflows](repro.html#repro)
1. Create a reproducible script in R Markdown
2. Edit the YAML header to add table of contents and other options
3. Include a table
4. Include a figure
5. Use `source()` to include code from an external file
6. Report the output of an analysis using inline R
0\.5 Formative Exercises
------------------------
Exercises are available at the end of each lesson’s webpage. These are not marked or mandatory, but if you can work through each of these (using web resources, of course), you will easily complete the marked assessments.
Download all [exercises and data files](exercises/msc-data-skills-exercises.zip) below as a ZIP archive.
* [01 intro](exercises/01_intro_exercise.Rmd): Intro to R, functions, R markdown
* [02 data](exercises/02_data_exercise.Rmd): Vectors, tabular data, data import, pipes
* [03 ggplot](exercises/03_ggplot_exercise.Rmd): Data visualisation
* [04 tidyr](exercises/04_tidyr_exercise.Rmd): Tidy Data
* [05 dplyr](exercises/05_dplyr_exercise.Rmd): Data wrangling
* [06 joins](exercises/06_joins_exercise.Rmd): Data relations
* [07 functions](exercises/07_func_exercise.Rmd): Functions and iteration
* [08 simulation](exercises/08_sim_exercise.Rmd): Simulation
* [09 glm](exercises/09_glm_exercise.Rmd): GLM
0\.6 I found a bug!
-------------------
This book is a work in progress, so you might find errors. Please help me fix them! The best way is to open an [issue on github](https://github.com/PsyTeachR/msc-data-skills/issues) that describes the error, but you can also mention it on the class Teams forum or [email Lisa](mailto:[email protected]?subject=msc-data-skills).
0\.7 Other Resources
--------------------
* [Learning Statistics with R](https://learningstatisticswithr-bookdown.netlify.com) by Navarro
* [R for Data Science](http://r4ds.had.co.nz) by Grolemund and Wickham
* [swirl](http://swirlstats.com)
* [R for Reproducible Scientific Analysis](http://swcarpentry.github.io/r-novice-gapminder/)
* [codeschool.com](http://tryr.codeschool.com)
* [datacamp](https://www.datacamp.com/courses/free-introduction-to-r)
* [Improving your statistical inferences](https://www.coursera.org/learn/statistical-inferences/) on Coursera
* You can access several cheatsheets in RStudio under the `Help` menu, or get the most recent [RStudio Cheat Sheets](https://www.rstudio.com/resources/cheatsheets/)
* [Style guide for R programming](http://style.tidyverse.org)
* [\#rstats on twitter](https://twitter.com/search?q=%2523rstats) highly recommended!
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/index.html |
Overview
========
This course provides an overview of skills needed for reproducible research and open science using the statistical programming language R. Students will learn about data visualisation, data tidying and wrangling, archiving, iteration and functions, probability and data simulations, general linear models, and reproducible workflows. Learning is reinforced through weekly assignments that involve working with different types of data.
0\.1 Course Aims
----------------
This course aims to teach students the basic principles of reproducible research and to provide practical training in data processing and analysis in the statistical programming language R.
0\.2 Intended Learning Outcomes
-------------------------------
By the end of this course students will be able to:
* Write scripts in R to organise and transform data sets using best accepted practices
* Explain basics of probability and its role in statistical inference
* Critically analyse data and report descriptive and inferential statistics in a reproducible manner
0\.3 Course Resources
---------------------
* [Data Skills Videos](https://www.youtube.com/playlist?list=PLA2iRWVwbpTIweEBHD2dOKjZHK1atRmXt)
Each chapter has several short video lectures for the main learning outcomes, collected in the playlist linked above. The videos are captioned, and watching with the captions on is a useful way to learn the jargon of computational reproducibility. If you cannot access YouTube, the videos are available on the course Teams and Moodle sites or by request from the instructor.
* [dataskills](https://github.com/psyteachr/msc-data-skills)
This is a custom R package for this course. You can install it with the code below. It will download all of the packages that are used in the book, along with an offline copy of this book, the shiny apps used in the book, and the exercises.
```
devtools::install_github("psyteachr/msc-data-skills")
```
* [glossary](https://github.com/psyteachr/glossary)
Coding and statistics both have a lot of specialist terms. Throughout this book, jargon will be linked to the glossary.
0\.4 Course Outline
-------------------
The overview below lists the beginner learning outcomes only. Some lessons have additional learning outcomes for intermediate or advanced students.
1. [Getting Started](intro.html#intro)
1. Understand the components of the [RStudio IDE](intro.html#rstudio_ide)
2. Type commands into the [console](intro.html#console)
3. Understand [function syntax](intro.html#function_syx)
4. Install a [package](intro.html#install-package)
5. Organise a [project](intro.html#projects)
6. Create and compile an [Rmarkdown document](intro.html#rmarkdown)
2. [Working with Data](data.html#data)
1. Load [built\-in datasets](data.html#builtin)
2. [Import data](data.html#import_data) from CSV and Excel files
3. Create a [data table](data.html#tables)
	4. Understand and use the [basic data types](data.html#data_types)
5. Understand and use the [basic container types](data.html#containers) (list, vector)
6. Use [vectorized operations](data.html#vectorized_ops)
7. Be able to [troubleshoot](#Troubleshooting) common data import problems
3. [Data Visualisation](ggplot.html#ggplot)
1. Understand what types of graphs are best for [different types of data](ggplot.html#vartypes)
2. Create common types of graphs with ggplot2
3. Set custom [labels](ggplot.html#custom-labels), [colours](ggplot.html#custom-colours), and [themes](ggplot.html#themes)
4. [Combine plots](combo_plots) on the same plot, as facets, or as a grid using cowplot
5. [Save plots](ggplot.html#ggsave) as an image file
4. [Tidy Data](tidyr.html#tidyr)
1. Understand the concept of [tidy data](tidyr.html#tidy-data)
2. Be able to convert between long and wide formats using [pivot functions](tidyr.html#pivot)
3. Be able to use the 4 basic [`tidyr` verbs](tidyr.html#tidy-verbs)
4. Be able to chain functions using [pipes](tidyr.html#pipes)
5. [Data Wrangling](dplyr.html#dplyr)
1. Be able to use the 6 main dplyr one\-table verbs: [`select()`](dplyr.html#select), [`filter()`](dplyr.html#filter), [`arrange()`](dplyr.html#arrange), [`mutate()`](dplyr.html#mutate), [`summarise()`](dplyr.html#summarise), [`group_by()`](dplyr.html#group_by)
2. Be able to [wrangle data by chaining tidyr and dplyr functions](dplyr.html#all-together)
3. Be able to use these additional one\-table verbs: [`rename()`](dplyr.html#rename), [`distinct()`](dplyr.html#distinct), [`count()`](dplyr.html#count), [`slice()`](dplyr.html#slice), [`pull()`](dplyr.html#pull)
6. [Data Relations](joins.html#joins)
1. Be able to use the 4 mutating join verbs: [`left_join()`](joins.html#left_join), [`right_join()`](joins.html#right_join), [`inner_join()`](joins.html#inner_join), [`full_join()`](joins.html#full_join)
2. Be able to use the 2 filtering join verbs: [`semi_join()`](joins.html#semi_join), [`anti_join()`](joins.html#anti_join)
3. Be able to use the 2 binding join verbs: [`bind_rows()`](joins.html#bind_rows), [`bind_cols()`](joins.html#bind_cols)
4. Be able to use the 3 set operations: [`intersect()`](joins.html#intersect), [`union()`](joins.html#union), [`setdiff()`](joins.html#setdiff)
7. [Iteration \& Functions](func.html#func)
1. Work with [iteration functions](func.html#iteration-functions): `rep()`, `seq()`, and `replicate()`
2. Use [`map()` and `apply()` functions](func.html#map-apply)
3. Write your own [custom functions](func.html#custom-functions) with `function()`
4. Set [default values](func.html#defaults) for the arguments in your functions
8. [Probability \& Simulation](sim.html#sim)
1. Generate and plot data randomly sampled from common distributions: uniform, binomial, normal, poisson
2. Generate related variables from a [multivariate](sim.html#mvdist) distribution
3. Define the following statistical terms: [p\-value](sim.html#p-value), [alpha](sim.html#alpha), [power](sim.html#power), smallest effect size of interest ([SESOI](#sesoi)), [false positive](sim.html#false-pos) (type I error), [false negative](#false-neg) (type II error), confidence interval ([CI](#conf-inf))
4. Test sampled distributions against a null hypothesis using: [exact binomial test](sim.html#exact-binom), [t\-test](sim.html#t-test) (1\-sample, independent samples, paired samples), [correlation](sim.html#correlation) (pearson, kendall and spearman)
5. [Calculate power](sim.html#calc-power-binom) using iteration and a sampling function
9. [Introduction to GLM](glm.html#glm)
1. Define the [components](glm.html#glm-components) of the GLM
2. [Simulate data](glm.html#sim-glm) using GLM equations
3. Identify the model parameters that correspond to the data\-generation parameters
4. Understand and plot [residuals](glm.html#residuals)
5. [Predict new values](glm.html#predict) using the model
6. Explain the differences among [coding schemes](glm.html#coding-schemes)
10. [Reproducible Workflows](repro.html#repro)
1. Create a reproducible script in R Markdown
2. Edit the YAML header to add table of contents and other options
3. Include a table
4. Include a figure
5. Use `source()` to include code from an external file
6. Report the output of an analysis using inline R
0\.5 Formative Exercises
------------------------
Exercises are available at the end of each lesson’s webpage. These are not marked or mandatory, but if you can work through each of these (using web resources, of course), you will easily complete the marked assessments.
Download all [exercises and data files](exercises/msc-data-skills-exercises.zip) below as a ZIP archive.
* [01 intro](exercises/01_intro_exercise.Rmd): Intro to R, functions, R markdown
* [02 data](exercises/02_data_exercise.Rmd): Vectors, tabular data, data import, pipes
* [03 ggplot](exercises/03_ggplot_exercise.Rmd): Data visualisation
* [04 tidyr](exercises/04_tidyr_exercise.Rmd): Tidy Data
* [05 dplyr](exercises/05_dplyr_exercise.Rmd): Data wrangling
* [06 joins](exercises/06_joins_exercise.Rmd): Data relations
* [07 functions](exercises/07_func_exercise.Rmd): Functions and iteration
* [08 simulation](exercises/08_sim_exercise.Rmd): Simulation
* [09 glm](exercises/09_glm_exercise.Rmd): GLM
0\.6 I found a bug!
-------------------
This book is a work in progress, so you might find errors. Please help me fix them! The best way is to open an [issue on github](https://github.com/PsyTeachR/msc-data-skills/issues) that describes the error, but you can also mention it on the class Teams forum or [email Lisa](mailto:[email protected]?subject=msc-data-skills).
0\.7 Other Resources
--------------------
* [Learning Statistics with R](https://learningstatisticswithr-bookdown.netlify.com) by Navarro
* [R for Data Science](http://r4ds.had.co.nz) by Grolemund and Wickham
* [swirl](http://swirlstats.com)
* [R for Reproducible Scientific Analysis](http://swcarpentry.github.io/r-novice-gapminder/)
* [codeschool.com](http://tryr.codeschool.com)
* [datacamp](https://www.datacamp.com/courses/free-introduction-to-r)
* [Improving your statistical inferences](https://www.coursera.org/learn/statistical-inferences/) on Coursera
* You can access several cheatsheets in RStudio under the `Help` menu, or get the most recent [RStudio Cheat Sheets](https://www.rstudio.com/resources/cheatsheets/)
* [Style guide for R programming](http://style.tidyverse.org)
* [\#rstats on twitter](https://twitter.com/search?q=%2523rstats) highly recommended!
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/intro.html |
Chapter 1 Getting Started
=========================
1\.1 Learning Objectives
------------------------
1. Understand the components of the [RStudio IDE](intro.html#rstudio_ide) [(video)](https://youtu.be/CbA6ZVlJE78)
2. Type commands into the [console](intro.html#console) [(video)](https://youtu.be/wbI4c_7y0kE)
3. Understand [function syntax](intro.html#function_syx) [(video)](https://youtu.be/X5P038N5Q8I)
4. Install a [package](intro.html#install-package) [(video)](https://youtu.be/u_pvHnqkVCE)
5. Organise a [project](intro.html#projects) [(video)](https://youtu.be/y-KiPueC9xw)
6. Create and compile an [Rmarkdown document](intro.html#rmarkdown) [(video)](https://youtu.be/EqJiAlJAl8Y)
1\.2 Resources
--------------
* [Chapter 1: Introduction](http://r4ds.had.co.nz/introduction.html) in *R for Data Science*
* [RStudio IDE Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/rstudio-ide.pdf)
* [Introduction to R Markdown](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/rmarkdown-2.0.pdf)
* [R Markdown Reference](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)
* [RStudio Cloud](https://rstudio.cloud/)
1\.3 What is R?
---------------
R is a programming environment for data processing and statistical analysis. We use R in Psychology at the University of Glasgow to promote [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This refers to being able to document and reproduce all of the steps between raw data and results. R allows you to write [scripts](https://psyteachr.github.io/glossary/s#script "A plain-text file that contains commands in a coding language, such as R.") that combine data files, clean data, and run analyses. There are many other ways to do this, including writing SPSS syntax files, but we find R to be a useful tool that is free, open source, and commonly used by research psychologists.
See Appendix [A](installingr.html#installingr) for more information on how to install R and associated programs.
### 1\.3\.1 The Base R Console
If you open up the application called R, you will see an “R Console” window that looks something like this.
Figure 1\.1: The R Console window.
You can close R and never open it again. We’ll be working entirely in RStudio in this class.
ALWAYS REMEMBER: Launch R through the RStudio IDE.
### 1\.3\.2 RStudio
[RStudio](http://www.rstudio.com) is an Integrated Development Environment ([IDE](https://psyteachr.github.io/glossary/i#ide "Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R.")). This is a program that serves as a text editor, file manager, and provides many functions to help you read and write R code.
Figure 1\.2: The RStudio IDE
RStudio is arranged with four window [panes](https://psyteachr.github.io/glossary/p#panes "RStudio is arranged with four window “panes.”"). By default, the upper left pane is the **source pane**, where you view and edit source code from files. The bottom left pane is usually the **console pane**, where you can type in commands and view output messages. The right panes have several different tabs that show you information about your code. You can change the location of panes and what tabs are shown under **`Preferences > Pane Layout`**.
### 1\.3\.3 Configure RStudio
In this class, you will be learning how to do [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This involves writing scripts that completely and transparently perform some analysis from start to finish in a way that yields the same result for different people using the same software on different computers. Transparency is a key value of science, as embodied in the “trust but verify” motto.
When you do things reproducibly, others can understand and check your work. This benefits science, but there is a selfish reason, too: the most important person who will benefit from a reproducible script is your future self. When you return to an analysis after two weeks of vacation, you will thank your earlier self for doing things in a transparent, reproducible way, as you can easily pick up right where you left off.
There are two tweaks that you should do to your RStudio installation to maximize reproducibility. Go to **`Global Options...`** under the **`Tools`** menu (⌘,), and uncheck the box that says **`Restore .RData into workspace at startup`**. If you keep things around in your workspace, things will get messy, and unexpected things will happen. You should always start with a clear workspace. This also means that you never want to save your workspace when you exit, so set this to **`Never`**. The only thing you want to save are your scripts.
Figure 1\.3: Alter these settings for increased reproducibility.
Your settings should have:
* Restore .RData into workspace at startup: Not Checked
* Save workspace to .RData on exit: Never
1\.4 Getting Started
--------------------
### 1\.4\.1 Console commands
We are first going to learn about how to interact with the [console](https://psyteachr.github.io/glossary/c#console "The pane in RStudio where you can type in commands and view output messages."). In general, you will be developing R [script](https://psyteachr.github.io/glossary/s#scripts "NA") or [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") files, rather than working directly in the console window. However, you can consider the console a kind of “sandbox” where you can try out lines of code and adapt them until you get them to do what you want. Then you can copy them back into the script editor.
Mostly, however, you will be typing into the script editor window (either into an R script or an R Markdown file) and then sending the commands to the console by placing the cursor on the line and holding down the Ctrl key while you press Enter. The Ctrl\+Enter key sequence sends the command in the script to the console.
One simple way to learn about the R console is to use it as a calculator. Enter the lines of code below and see if your results match. Be prepared to make lots of typos (at first).
```
1 + 1
```
```
## [1] 2
```
The R console remembers a history of the commands you typed in the past. Use the up and down arrow keys on your keyboard to scroll backwards and forwards through your history. It’s a lot faster than re\-typing.
```
1 + 1 + 3
```
```
## [1] 5
```
You can break up mathematical expressions over multiple lines; R waits for a complete expression before processing it.
```
## here comes a long expression
## let's break it over multiple lines
1 + 2 + 3 + 4 + 5 + 6 +
7 + 8 + 9 +
10
```
```
## [1] 55
```
Text inside quotes is called a [string](https://psyteachr.github.io/glossary/s#string "A piece of text inside of quotes.").
```
"Good afternoon"
```
```
## [1] "Good afternoon"
```
You can break up text over multiple lines; R waits for a close quote before processing it. If you want to include a double quote inside this quoted string, [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it with a backslash.
```
africa <- "I hear the drums echoing tonight
But she hears only whispers of some quiet conversation
She's coming in, 12:30 flight
The moonlit wings reflect the stars that guide me towards salvation
I stopped an old man along the way
Hoping to find some old forgotten words or ancient melodies
He turned to me as if to say, \"Hurry boy, it's waiting there for you\"
- Toto"
cat(africa) # cat() prints the string
```
```
## I hear the drums echoing tonight
## But she hears only whispers of some quiet conversation
## She's coming in, 12:30 flight
## The moonlit wings reflect the stars that guide me towards salvation
## I stopped an old man along the way
## Hoping to find some old forgotten words or ancient melodies
## He turned to me as if to say, "Hurry boy, it's waiting there for you"
##
## - Toto
```
### 1\.4\.2 Objects
Often you want to store the result of some computation for later use. You can store it in an [object](https://psyteachr.github.io/glossary/o#object "A word that identifies and stores the value of some data for later use.") (also sometimes called a [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.")). An object in R:
* contains only letters, numbers, full stops, and underscores
* starts with a letter or a full stop and a letter
* distinguishes uppercase and lowercase letters (`rickastley` is not the same as `RickAstley`)
The following are valid and different objects:
* songdata
* SongData
* song\_data
* song.data
* .song.data
* never\_gonna\_give\_you\_up\_never\_gonna\_let\_you\_down
The following are not valid objects:
* \_song\_data
* 1song
* .1song
* song data
* song\-data
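For example, because R distinguishes uppercase and lowercase letters, two names that differ only in capitalisation are two separate objects. The sketch below (using the assignment arrow `<-`, introduced next, with object names chosen purely for illustration) shows this:
```
# Two distinct objects that differ only in capitalisation
songdata <- 1
SongData <- 2
songdata == SongData # FALSE: they are different objects holding different values
```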
Use the [assignment operator](https://psyteachr.github.io/glossary/a#assignment-operator "The symbol <-, which functions like = and assigns the value on the right to the object on the left") `<-` to assign the value on the right to the object named on the left.
```
## use the assignment operator '<-'
## R stores the number in the object
x <- 5
```
Now that we have set `x` to a value, we can do something with it:
```
x * 2
```
```
## [1] 10
```
```
## R evaluates the expression and stores the result in the object boring_calculation
boring_calculation <- 2 + 2
```
Note that it doesn’t print the result back at you when it’s stored. To view the result, just type the object name on a blank line.
```
boring_calculation
```
```
## [1] 4
```
Once an object is assigned a value, its value doesn’t change unless you reassign the object, even if the objects you used to calculate it change. Predict what the code below does and test yourself:
```
this_year <- 2019
my_birth_year <- 1976
my_age <- this_year - my_birth_year
this_year <- 2020
```
After all the code above is run:
* `this_year` \= 43 / 44 / 1976 / 2019 / 2020
* `my_birth_year` \= 43 / 44 / 1976 / 2019 / 2020
* `my_age` \= 43 / 44 / 1976 / 2019 / 2020
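If you want to check your predictions, printing each object in the console reveals its current value. The comments below show what you should see (a quick sketch; run the code yourself to confirm):
```
this_year      # 2020: reassigned on the last line
my_birth_year  # 1976: never changed
my_age         # 43: computed before this_year was reassigned, so it keeps its value
```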
### 1\.4\.3 The environment
Anytime you assign something to a new object, R creates a new entry in the [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs"). Objects in the global environment exist until you end your session; then they disappear forever (unless you save them).
Look at the **Environment** tab in the upper right pane. It lists all of the objects you have created. Click the broom icon to clear all of the objects and start fresh. You can also use the following functions in the console to view all objects, remove one object, or remove all objects.
```
ls() # print the objects in the global environment
rm("x") # remove the object named x from the global environment
rm(list = ls()) # clear out the global environment
```
In the upper right corner of the Environment tab, change **`List`** to **`Grid`**. Now you can see the type, length, and size of your objects, and reorder the list by any of these attributes.
### 1\.4\.4 Whitespace
R mostly ignores [whitespace](https://psyteachr.github.io/glossary/w#whitespace "Spaces, tabs and line breaks"): spaces, tabs, and line breaks. This means that you can use whitespace to help you organise your code.
```
# a and b are identical
a <- list(ctl = "Control Condition", exp1 = "Experimental Condition 1", exp2 = "Experimental Condition 2")
# but b is much easier to read
b <- list(ctl = "Control Condition",
exp1 = "Experimental Condition 1",
exp2 = "Experimental Condition 2")
```
When you see `>` at the beginning of a line, that means R is waiting for you to start a new command. However, if you see a `+` instead of `>` at the start of the line, that means R is waiting for you to finish a command you started on a previous line. If you want to cancel whatever command you started, just press the Esc key in the console window and you’ll get back to the `>` command prompt.
```
# R waits until next line for evaluation
(3 + 2) *
5
```
```
## [1] 25
```
It is often useful to break up long functions onto several lines.
```
cat("3, 6, 9, the goose drank wine",
"The monkey chewed tobacco on the streetcar line",
"The line broke, the monkey got choked",
"And they all went to heaven in a little rowboat",
sep = " \n")
```
```
## 3, 6, 9, the goose drank wine
## The monkey chewed tobacco on the streetcar line
## The line broke, the monkey got choked
## And they all went to heaven in a little rowboat
```
### 1\.4\.5 Function syntax
A lot of what you do in R involves calling a [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") and storing the results. A function is a named section of code that can be reused.
For example, `sd` is a function that returns the [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of the [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") of numbers that you provide as the input [argument](https://psyteachr.github.io/glossary/a#argument "A variable that provides input to a function."). Functions are set up like this:
`function_name(argument1, argument2 = "value")`.
The arguments in parentheses can be named (like `argument1 = 10`) or you can skip the names if you put them in the exact same order that they’re defined in the function. You can check this order by typing `?sd` (or whatever function name you’re looking up) into the console; the Help pane will show the default order under **Usage**. You can also skip arguments that have a default value specified.
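If you just want a quick look at a function’s argument order and defaults without opening the Help pane, `args()` prints the usage line in the console (a small sketch):
```
# args() shows a function's argument names and default values
args(sd) # prints the usage line: function (x, na.rm = FALSE)
```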
Most functions return a value, but may also produce side effects like printing to the console.
To illustrate, the function `rnorm()` generates random numbers from the standard [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable."). The help page for `rnorm()` (accessed by typing `?rnorm` in the console) shows that it has the syntax
`rnorm(n, mean = 0, sd = 1)`
where `n` is the number of randomly generated numbers you want, `mean` is the mean of the distribution, and `sd` is the standard deviation. The default mean is 0, and the default standard deviation is 1\. There is no default for `n`, which means you’ll get an error if you don’t specify it:
```
rnorm()
```
```
## Error in rnorm(): argument "n" is missing, with no default
```
If you want 10 random numbers from a normal distribution with a mean of 0 and a standard deviation of 1, you can just use the defaults.
```
rnorm(10)
```
```
## [1] -0.04234663 -2.00393149 0.83611187 -1.46404127 1.31714428 0.42608581
## [7] -0.46673798 -0.01670509 1.64072295 0.85876439
```
If you want 10 numbers from a normal distribution with a mean of 100:
```
rnorm(10, 100)
```
```
## [1] 101.34917 99.86059 100.36287 99.65575 100.66818 100.04771 99.76782
## [8] 102.57691 100.05575 99.42490
```
This would be an equivalent, but more verbose, way of calling the function:
```
rnorm(n = 10, mean = 100)
```
```
## [1] 100.52773 99.40241 100.39641 101.01629 99.41961 100.52202 98.09828
## [8] 99.52169 100.25677 99.92092
```
We don’t need to name the arguments because R will recognize that we intended to fill in the first and second arguments by their position in the function call. However, if we want to change the default for an argument coming later in the list, then we need to name it. For instance, if we wanted to keep the default `mean = 0` but change the standard deviation to 100 we would do it this way:
```
rnorm(10, sd = 100)
```
```
## [1] -68.254349 -17.636619 140.047575 7.570674 -68.309751 -2.378786
## [7] 117.356343 -104.772092 -40.163750 54.358941
```
Some functions give a list of options after an argument; this means the default value is the first option. The usage entry for the `power.t.test()` function looks like this:
```
power.t.test(n = NULL, delta = NULL, sd = 1, sig.level = 0.05,
power = NULL,
type = c("two.sample", "one.sample", "paired"),
alternative = c("two.sided", "one.sided"),
strict = FALSE, tol = .Machine$double.eps^0.25)
```
* What is the default value for `sd`? NULL / 1 / 0\.05 / two.sample
* What is the default value for `type`? NULL / two.sample / one.sample / paired
* Which is equivalent to `power.t.test(100, 0.5)`? `power.t.test(100, 0.5, sig.level = 1, sd = 0.05)` / `power.t.test()` / `power.t.test(n = 100)` / `power.t.test(delta = 0.5, n = 100)`
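To see how these defaults play out in practice, the two calls below are equivalent (a sketch): positional matching fills in `n` and `delta`, while `sd`, `sig.level`, `type`, and `alternative` all keep their default values, so both calls compute the power of a two\-sample t\-test with 100 per group and a true difference of 0\.5\.
```
# Positional matching: n = 100, delta = 0.5, everything else at its default
power.t.test(100, 0.5)

# The same call with the remaining arguments written out explicitly
power.t.test(n = 100, delta = 0.5, sd = 1, sig.level = 0.05,
             type = "two.sample", alternative = "two.sided")
```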
### 1\.4\.6 Getting help
Start up help in a browser using the function `help.start()`.
If a function is in [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") or a loaded [package](https://psyteachr.github.io/glossary/p#package "A group of R functions."), you can use the `help("function_name")` function or the `?function_name` shortcut to access the help file. If the package isn’t loaded, specify the package name as the second argument to the help function.
```
# these methods are all equivalent ways of getting help
help("rnorm")
?rnorm
help("rnorm", package="stats")
```
When the package isn’t loaded or you aren’t sure what package the function is in, use the shortcut `??function_name`.
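For example, the one\-line sketch below searches the help pages of every installed package for a match, which is handy when you can’t remember which package a function lives in:
```
# Fuzzy search across the help index of all installed packages
??read_excel
```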
* What is the first argument to the `mean` function? trim / na.rm / mean / x
* What package is `read_excel` in? readr / readxl / base / stats
1\.5 Add\-on packages
---------------------
One of the great things about R is that it is **user extensible**: anyone can create a new add\-on software package that extends its functionality. There are currently thousands of add\-on packages that R users have created to solve many different kinds of problems, or just simply to have fun. There are packages for data visualisation, machine learning, neuroimaging, eyetracking, web scraping, and playing games such as Sudoku.
Add\-on packages are not distributed with base R, but have to be downloaded and installed from an archive, in the same way that you would, for instance, download and install a fitness app on your smartphone.
The main repository where packages reside is called CRAN, the Comprehensive R Archive Network. A package has to pass strict tests devised by the R core team to be allowed to be part of the CRAN archive. You can install from the CRAN archive through R using the `install.packages()` function.
There is an important distinction between **installing** a package and **loading** a package.
### 1\.5\.1 Installing a package
This is done using `install.packages()`. This is like installing an app on your phone: you only have to do it once and the app will remain installed until you remove it. For instance, if you want to use PokemonGo on your phone, you install it once from the App Store or Play Store, and you don’t have to re\-install it each time you want to use it. Once you launch the app, it will run in the background until you close it or restart your phone. Likewise, when you install a package, the package will be available (but not *loaded*) every time you open up R.
You may only be able to permanently install packages if you are using R on your own system; you may not be able to do this on public workstations if you lack the appropriate privileges.
Install the `ggExtra` package on your system. This package lets you create plots with marginal histograms.
```
install.packages("ggExtra")
```
If you don’t already have packages like ggplot2 and shiny installed, it will also install these **dependencies** for you. If you don’t get an error message at the end, the installation was successful.
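If you like, you can also install several packages in one call by passing a character vector of package names (a sketch, not something you need to do right now):
```
# install several packages (and their dependencies) in one go
install.packages(c("ggExtra", "tidyverse"))
```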
### 1\.5\.2 Loading a package
This is done using `library(packagename)`. This is like **launching** an app on your phone: the functionality is only available while the app is open, and it goes away when you close the app or restart your phone. Likewise, when you run `library(packagename)` within a session, the functionality of the package referred to by `packagename` will be made available for your R session. The next time you start R, you will need to run the `library()` function again if you want to access that package’s functionality.
You can load the functions in `ggExtra` for your current R session as follows:
```
library(ggExtra)
```
You might get some red text when you load a package; this is normal. It is usually warning you that this package has functions with the same names as functions in other packages you’ve already loaded.
You can use the convention `package::function()` to indicate in which add\-on package a function resides. For instance, if you see `readr::read_csv()`, that refers to the function `read_csv()` in the `readr` add\-on package.
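For example, here is a small sketch of how the prefix lets you pick between two functions that share a name (`filter()` exists in both the `dplyr` and `stats` packages):
```
library(dplyr)                    # dplyr::filter() now masks stats::filter()
dplyr::filter(mtcars, cyl == 6)   # the dplyr verb: keep rows where cyl is 6
stats::filter(1:10, rep(1/3, 3))  # the base time-series filter, still reachable via its prefix
```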
Now you can run the function `ggExtra::runExample()`, which runs an interactive example of marginal plots using shiny.
```
ggExtra::runExample()
```
### 1\.5\.3 Install from GitHub
Many R packages are not yet on [CRAN](https://psyteachr.github.io/glossary/c#cran "The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up-to-date, versions of code and documentation for R.") because they are still in development. Increasingly, datasets and code for papers are available as packages you can download from GitHub. You’ll need to install the devtools package to be able to install packages from GitHub. Check whether you have a package installed by trying to load it (e.g., if you don’t have devtools installed, `library("devtools")` will display an error message) or by searching for it in the Packages tab in the lower right pane. All listed packages are installed; all checked packages are currently loaded.
Figure 1\.4: Check installed and loaded packages in the packages tab in the lower right pane.
```
# install devtools if you get
# Error in loadNamespace(name) : there is no package called ‘devtools’
# install.packages("devtools")
devtools::install_github("psyteachr/msc-data-skills")
```
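A common pattern (just a sketch, not required here) is to install devtools only when it is missing, then install the book’s package:
```
# install devtools only if it isn't already installed, then install the book's package
if (!requireNamespace("devtools", quietly = TRUE)) {
  install.packages("devtools")
}
devtools::install_github("psyteachr/msc-data-skills")
```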
After you install the dataskills package, load it using the `library()` function. You can then try out some of the functions below.
* `book()` opens a local copy of this book in your web browser.
* `app("plotdemo")` opens a shiny app that lets you see how simulated data would look in different plot styles
* `exercise(1)` creates and opens a file containing the exercises for this chapter
* `?disgust` shows you the documentation for the built\-in dataset `disgust`, which we will be using in future lessons
```
library(dataskills)
book()
app("plotdemo")
exercise(1)
?disgust
```
How many different ways can you find to discover what functions are available in the dataskills package?
1\.6 Organising a project
-------------------------
[Projects](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") in RStudio are a way to group all of the files you need for one project. Most projects include scripts, data files, and output files like the PDF version of the script or images.
Make a new directory where you will keep all of your materials for this class. If you’re using a lab computer, make sure you make this directory in your network drive so you can access it from other computers.
Choose **`New Project…`** under the **`File`** menu to create a new project called `01-intro` in this directory.
### 1\.6\.1 Structure
Here is what an R script looks like. Don’t worry about the details for now.
```
# load add-on packages
library(tidyverse)

# set object ----
n <- 100

# simulate data ----
data <- data.frame(
  id = 1:n,
  dv = c(rnorm(n/2, 0), rnorm(n/2, 1)),
  condition = rep(c("A", "B"), each = n/2)
)

# plot data ----
ggplot(data, aes(condition, dv)) +
  geom_violin(trim = FALSE) +
  geom_boxplot(width = 0.25,
               aes(fill = condition),
               show.legend = FALSE)

# save plot ----
ggsave("sim_data.png", width = 8, height = 6)
```
It’s best to follow this structure when developing your own scripts (a minimal skeleton appears after the list):
* load in any add\-on packages you need to use
* define any custom functions
* load or simulate the data you will be working with
* work with the data
* save anything you need to save
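As a rough sketch, a script following this structure might look like the template below (the file names, the `se()` helper, and the column names are placeholders for illustration, not files provided with this book):
```
# load add-on packages ----
library(tidyverse)

# define custom functions ----
se <- function(x) sd(x) / sqrt(length(x))  # standard error of the mean

# load or simulate data ----
dat <- read_csv("data/questionnaire.csv")  # hypothetical data file

# work with the data ----
dv_summary <- dat %>%
  group_by(condition) %>%
  summarise(mean_dv = mean(dv), se_dv = se(dv))

# save anything you need to save ----
write_csv(dv_summary, "dv_summary.csv")
```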
Often when you are working on a script, you will realize that you need to load another add\-on package. Don’t bury the call to `library(package_I_need)` way down in the script. Put it at the top, so the user has an overview of what packages are needed.
You can add comments to an R script with the hash symbol (`#`). The R interpreter will ignore characters from the hash to the end of the line.
```
## comments: any text from '#' on is ignored until end of line
22 / 7 # approximation to pi
```
```
## [1] 3.142857
```
If you add 4 or more dashes to the end of a comment, it acts like a section header and shows up in the document outline (⇧⌘O), as in the `# set object ----` and `# simulate data ----` comments in the example script above.
### 1\.6\.2 Reproducible reports with R Markdown
We will make reproducible reports following the principles of [literate programming](https://en.wikipedia.org/wiki/Literate_programming). The basic idea is to have the text of the report together in a single document along with the code needed to perform all analyses and generate the tables. The report is then “compiled” from the original format into some other, more portable format, such as HTML or PDF. This is different from traditional cutting and pasting approaches where, for instance, you create a graph in Microsoft Excel or a statistics program like SPSS and then paste it into Microsoft Word.
We will use [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") to create reproducible reports, which enables mixing of text and code. A reproducible script will contain sections of code in code blocks. A code block starts and ends with three backtick symbols in a row, with some information about the code between curly brackets, such as `{r chunk-name, echo=FALSE}` (this runs the code, but does not show the text of the code block in the compiled document). The text outside of code blocks is written in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links."), which is a way to specify formatting, such as headers, paragraphs, lists, bolding, and links.
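To make the chunk syntax concrete, here is a minimal sketch of what an R Markdown file could contain (the chunk name `demo-plot` and its contents are only an illustration):
````
---
title: "My Reproducible Report"
author: "Me"
output: html_document
---

This sentence is ordinary markdown text.

```{r demo-plot, echo=FALSE}
# this chunk runs when the document is knitted,
# but echo=FALSE hides the code itself in the report
hist(rnorm(100))
```
````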
Figure 1\.5: A reproducible script.
If you open up a new R Markdown file from a template, you will see an example document with several code blocks in it. To create an HTML or PDF report from an R Markdown (Rmd) document, you compile it. Compiling a document is called [knitting](https://psyteachr.github.io/glossary/k#knit "To create an HTML, PDF, or Word document from an R Markdown (Rmd) document") in RStudio. There is a button that looks like a ball of yarn with needles through it that you click on to compile your file into a report.
Create a new R Markdown file from the **`File > New File > R Markdown…`** menu. Change the title and author, then click the knit button to create an html file.
### 1\.6\.3 Working Directory
Where should you put all of your files? When developing an analysis, you usually want to have all of your scripts and data files in one subtree of your computer’s directory structure. Usually there is a single [working directory](https://psyteachr.github.io/glossary/w#working-directory "The filepath where R is currently reading and writing files.") where your data and scripts are stored.
Your script should only reference files in three locations, using the appropriate format.
| Where | Example |
| --- | --- |
| on the web | “[https://psyteachr.github.io/msc\-data\-skills/data/disgust\_scores.csv](https://psyteachr.github.io/msc-data-skills/data/disgust_scores.csv)” |
| in the working directory | “disgust\_scores.csv” |
| in a subdirectory | “data/disgust\_scores.csv” |
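For instance (a sketch using the `disgust_scores.csv` file from the table above; `read_csv()` comes from the readr package, which loads with the tidyverse):
```
library(readr)
# on the web
dat <- read_csv("https://psyteachr.github.io/msc-data-skills/data/disgust_scores.csv")
# in the working directory
dat <- read_csv("disgust_scores.csv")
# in a subdirectory called data
dat <- read_csv("data/disgust_scores.csv")
```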
Never set or change your working directory in a script.
If you are working with an R Markdown file, it will automatically use the same directory the .Rmd file is in as the working directory.
If you are working with R scripts, store your main script file in the top\-level directory and manually set your working directory to that location. You will have to reset the working directory each time you open RStudio, unless you create a [project](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") and access the script from the project.
For instance, if you are on a Windows machine and your data and scripts are in the directory `C:\Carla's_files\thesis2\my_thesis\new_analysis`, you will set your working directory in one of two ways: (1) by going to the `Session` pull\-down menu in RStudio and choosing `Set Working Directory`, or (2) by typing `setwd("C:/Carla's_files/thesis2/my_thesis/new_analysis")` in the console window (note the forward slashes; a path with single backslashes will not work inside an R string).
It’s tempting to make your life simple by putting the `setwd()` command in your script. Don’t do this! Others will not have the same directory tree as you (and when your laptop dies and you get a new one, neither will you).
When manually setting the working directory, always do so by using the **`Session > Set Working Directory`** pull\-down option or by typing `setwd()` in the console.
If your script needs a file in a subdirectory of `new_analysis`, say, `data/questionnaire.csv`, load it in using a [relative path](https://psyteachr.github.io/glossary/r#relative-path "The location of a file in relation to the working directory.") so that it is accessible if you move the folder `new_analysis` to another location or computer:
```
dat <- read_csv("data/questionnaire.csv") # correct
```
Do not load it in using an [absolute path](https://psyteachr.github.io/glossary/a#absolute-path "A file path that starts with / and is not appended to the working directory"):
```
dat <- read_csv("C:/Carla's_files/thesis22/my_thesis/new_analysis/data/questionnaire.csv") # wrong
```
Also note the convention of using forward slashes, unlike the Windows\-specific convention of using backward slashes. This is to make references to files platform independent.
1\.7 Glossary
-------------
Each chapter ends with a glossary table defining the jargon introduced in this chapter. The links below take you to the [glossary book](https://psyteachr.github.io/glossary), which you can also download for offline use with `devtools::install_github("psyteachr/glossary")` and access the glossary offline with `glossary::book()`.
| term | definition |
| --- | --- |
| [absolute path](https://psyteachr.github.io/glossary/a#absolute.path) | A file path that starts with / and is not appended to the working directory |
| [argument](https://psyteachr.github.io/glossary/a#argument) | A variable that provides input to a function. |
| [assignment operator](https://psyteachr.github.io/glossary/a#assignment.operator) | The symbol \<\-, which functions like \= and assigns the value on the right to the object on the left |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [console](https://psyteachr.github.io/glossary/c#console) | The pane in RStudio where you can type in commands and view output messages. |
| [cran](https://psyteachr.github.io/glossary/c#cran) | The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up\-to\-date, versions of code and documentation for R. |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [ide](https://psyteachr.github.io/glossary/i#ide) | Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R. |
| [knit](https://psyteachr.github.io/glossary/k#knit) | To create an HTML, PDF, or Word document from an R Markdown (Rmd) document |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [object](https://psyteachr.github.io/glossary/o#object) | A word that identifies and stores the value of some data for later use. |
| [package](https://psyteachr.github.io/glossary/p#package) | A group of R functions. |
| [panes](https://psyteachr.github.io/glossary/p#panes) | RStudio is arranged with four window “panes.” |
| [project](https://psyteachr.github.io/glossary/p#project) | A way to organise related files in RStudio |
| [r markdown](https://psyteachr.github.io/glossary/r#r.markdown) | The R\-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code. |
| [relative path](https://psyteachr.github.io/glossary/r#relative.path) | The location of a file in relation to the working directory. |
| [reproducible research](https://psyteachr.github.io/glossary/r#reproducible.research) | Research that documents all of the steps between raw data and results in a way that can be verified. |
| [script](https://psyteachr.github.io/glossary/s#script) | A plain\-text file that contains commands in a coding language, such as R. |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
| [string](https://psyteachr.github.io/glossary/s#string) | A piece of text inside of quotes. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [whitespace](https://psyteachr.github.io/glossary/w#whitespace) | Spaces, tabs and line breaks |
| [working directory](https://psyteachr.github.io/glossary/w#working.directory) | The filepath where R is currently reading and writing files. |
1\.8 Exercises
--------------
Download the first set of [exercises](exercises/01_intro_exercise.Rmd) and put it in the project directory you created earlier for today’s exercises. See the [answers](exercises/01_intro_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(1)
# run this to access the answers
dataskills::exercise(1, answers = TRUE)
```
We don’t need to name the arguments because R will recognize that we intended to fill in the first and second arguments by their position in the function call. However, if we want to change the default for an argument coming later in the list, then we need to name it. For instance, if we wanted to keep the default `mean = 0` but change the standard deviation to 100 we would do it this way:
```
rnorm(10, sd = 100)
```
```
## [1] -68.254349 -17.636619 140.047575 7.570674 -68.309751 -2.378786
## [7] 117.356343 -104.772092 -40.163750 54.358941
```
Some functions give a list of options after an argument; this means the default value is the first option. The usage entry for the `power.t.test()` function looks like this:
```
power.t.test(n = NULL, delta = NULL, sd = 1, sig.level = 0.05,
power = NULL,
type = c("two.sample", "one.sample", "paired"),
alternative = c("two.sided", "one.sided"),
strict = FALSE, tol = .Machine$double.eps^0.25)
```
* What is the default value for `sd`? NULL 1 0\.05 two.sample
* What is the default value for `type`? NULL two.sample one.sample paired
* Which is equivalent to `power.t.test(100, 0.5)`? power.t.test(100, 0\.5, sig.level \= 1, sd \= 0\.05\) power.t.test() power.t.test(n \= 100\) power.t.test(delta \= 0\.5, n \= 100\)
### 1\.4\.6 Getting help
Start up help in a browser using the function `help.start()`.
If a function is in [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") or a loaded [package](https://psyteachr.github.io/glossary/p#package "A group of R functions."), you can use the `help("function_name")` function or the `?function_name` shortcut to access the help file. If the package isn’t loaded, specify the package name as the second argument to the help function.
```
# these methods are all equivalent ways of getting help
help("rnorm")
?rnorm
help("rnorm", package="stats")
```
When the package isn’t loaded or you aren’t sure what package the function is in, use the shortcut `??function_name`.
* What is the first argument to the `mean` function? trim na.rm mean x
* What package is `read_excel` in? readr readxl base stats
### 1\.4\.1 Console commands
We are first going to learn about how to interact with the [console](https://psyteachr.github.io/glossary/c#console "The pane in RStudio where you can type in commands and view output messages."). In general, you will be developing R [script](https://psyteachr.github.io/glossary/s#scripts "NA") or [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") files, rather than working directly in the console window. However, you can consider the console a kind of “sandbox” where you can try out lines of code and adapt them until you get them to do what you want. Then you can copy them back into the script editor.
Mostly, however, you will be typing into the script editor window (either into an R script or an R Markdown file) and then sending the commands to the console by placing the cursor on the line and holding down the Ctrl key while you press Enter. The Ctrl\+Enter key sequence sends the command in the script to the console.
One simple way to learn about the R console is to use it as a calculator. Enter the lines of code below and see if your results match. Be prepared to make lots of typos (at first).
```
1 + 1
```
```
## [1] 2
```
The R console remembers a history of the commands you typed in the past. Use the up and down arrow keys on your keyboard to scroll backwards and forwards through your history. It’s a lot faster than re\-typing.
```
1 + 1 + 3
```
```
## [1] 5
```
You can break up mathematical expressions over multiple lines; R waits for a complete expression before processing it.
```
## here comes a long expression
## let's break it over multiple lines
1 + 2 + 3 + 4 + 5 + 6 +
7 + 8 + 9 +
10
```
```
## [1] 55
```
Text inside quotes is called a [string](https://psyteachr.github.io/glossary/s#string "A piece of text inside of quotes.").
```
"Good afternoon"
```
```
## [1] "Good afternoon"
```
You can break up text over multiple lines; R waits for a close quote before processing it. If you want to include a double quote inside this quoted string, [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it with a backslash.
```
africa <- "I hear the drums echoing tonight
But she hears only whispers of some quiet conversation
She's coming in, 12:30 flight
The moonlit wings reflect the stars that guide me towards salvation
I stopped an old man along the way
Hoping to find some old forgotten words or ancient melodies
He turned to me as if to say, \"Hurry boy, it's waiting there for you\"
- Toto"
cat(africa) # cat() prints the string
```
```
## I hear the drums echoing tonight
## But she hears only whispers of some quiet conversation
## She's coming in, 12:30 flight
## The moonlit wings reflect the stars that guide me towards salvation
## I stopped an old man along the way
## Hoping to find some old forgotten words or ancient melodies
## He turned to me as if to say, "Hurry boy, it's waiting there for you"
##
## - Toto
```
### 1\.4\.2 Objects
Often you want to store the result of some computation for later use. You can store it in an [object](https://psyteachr.github.io/glossary/o#object "A word that identifies and stores the value of some data for later use.") (also sometimes called a [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.")). An object in R:
* contains only letters, numbers, full stops, and underscores
* starts with a letter or a full stop and a letter
* distinguishes uppercase and lowercase letters (`rickastley` is not the same as `RickAstley`)
The following are valid and different objects:
* songdata
* SongData
* song\_data
* song.data
* .song.data
* never\_gonna\_give\_you\_up\_never\_gonna\_let\_you\_down
The following are not valid objects:
* \_song\_data
* 1song
* .1song
* song data
* song\-data
Use the [assignment operator](https://psyteachr.github.io/glossary/a#assignment-operator "The symbol <-, which functions like = and assigns the value on the right to the object on the left")\<\-\` to assign the value on the right to the object named on the left.
```
## use the assignment operator '<-'
## R stores the number in the object
x <- 5
```
Now that we have set `x` to a value, we can do something with it:
```
x * 2
## R evaluates the expression and stores the result in the object boring_calculation
boring_calculation <- 2 + 2
```
```
## [1] 10
```
Note that it doesn’t print the result back at you when it’s stored. To view the result, just type the object name on a blank line.
```
boring_calculation
```
```
## [1] 4
```
Once an object is assigned a value, its value doesn’t change unless you reassign the object, even if the objects you used to calculate it change. Predict what the code below does and test yourself:
```
this_year <- 2019
my_birth_year <- 1976
my_age <- this_year - my_birth_year
this_year <- 2020
```
After all the code above is run:
* `this_year` \= 43 44 1976 2019 2020
* `my_birth_year` \= 43 44 1976 2019 2020
* `my_age` \= 43 44 1976 2019 2020
### 1\.4\.3 The environment
Anytime you assign something to a new object, R creates a new entry in the [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs"). Objects in the global environment exist until you end your session; then they disappear forever (unless you save them).
Look at the **Environment** tab in the upper right pane. It lists all of the objects you have created. Click the broom icon to clear all of the objects and start fresh. You can also use the following functions in the console to view all objects, remove one object, or remove all objects.
```
ls() # print the objects in the global environment
rm("x") # remove the object named x from the global environment
rm(list = ls()) # clear out the global environment
```
In the upper right corner of the Environment tab, change **`List`** to **`Grid`**. Now you can see the type, length, and size of your objects, and reorder the list by any of these attributes.
### 1\.4\.4 Whitespace
R mostly ignores [whitespace](https://psyteachr.github.io/glossary/w#whitespace "Spaces, tabs and line breaks"): spaces, tabs, and line breaks. This means that you can use whitespace to help you organise your code.
```
# a and b are identical
a <- list(ctl = "Control Condition", exp1 = "Experimental Condition 1", exp2 = "Experimental Condition 2")
# but b is much easier to read
b <- list(ctl = "Control Condition",
exp1 = "Experimental Condition 1",
exp2 = "Experimental Condition 2")
```
When you see `>` at the beginning of a line, that means R is waiting for you to start a new command. However, if you see a `+` instead of `>` at the start of the line, that means R is waiting for you to finish a command you started on a previous line. If you want to cancel whatever command you started, just press the Esc key in the console window and you’ll get back to the `>` command prompt.
```
# R waits until next line for evaluation
(3 + 2) *
5
```
```
## [1] 25
```
It is often useful to break up long functions onto several lines.
```
cat("3, 6, 9, the goose drank wine",
"The monkey chewed tobacco on the streetcar line",
"The line broke, the monkey got choked",
"And they all went to heaven in a little rowboat",
sep = " \n")
```
```
## 3, 6, 9, the goose drank wine
## The monkey chewed tobacco on the streetcar line
## The line broke, the monkey got choked
## And they all went to heaven in a little rowboat
```
### 1\.4\.5 Function syntax
A lot of what you do in R involves calling a [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") and storing the results. A function is a named section of code that can be reused.
For example, `sd` is a function that returns the [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of the [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") of numbers that you provide as the input [argument](https://psyteachr.github.io/glossary/a#argument "A variable that provides input to a function."). Functions are set up like this:
`function_name(argument1, argument2 = "value")`.
The arguments in parentheses can be named (like, `argument1 = 10`) or you can skip the names if you put them in the exact same order that they’re defined in the function. You can check this by typing `?sd` (or whatever function name you’re looking up) into the console and the Help pane will show you the default order under **Usage**. You can also skip arguments that have a default value specified.
Most functions return a value, but may also produce side effects like printing to the console.
To illustrate, the function `rnorm()` generates random numbers from the standard [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable."). The help page for `rnorm()` (accessed by typing `?rnorm` in the console) shows that it has the syntax
`rnorm(n, mean = 0, sd = 1)`
where `n` is the number of randomly generated numbers you want, `mean` is the mean of the distribution, and `sd` is the standard deviation. The default mean is 0, and the default standard deviation is 1\. There is no default for `n`, which means you’ll get an error if you don’t specify it:
```
rnorm()
```
```
## Error in rnorm(): argument "n" is missing, with no default
```
If you want 10 random numbers from a normal distribution with mean of 0 and standard deviation, you can just use the defaults.
```
rnorm(10)
```
```
## [1] -0.04234663 -2.00393149 0.83611187 -1.46404127 1.31714428 0.42608581
## [7] -0.46673798 -0.01670509 1.64072295 0.85876439
```
If you want 10 numbers from a normal distribution with a mean of 100:
```
rnorm(10, 100)
```
```
## [1] 101.34917 99.86059 100.36287 99.65575 100.66818 100.04771 99.76782
## [8] 102.57691 100.05575 99.42490
```
This would be an equivalent but less efficient way of calling the function:
```
rnorm(n = 10, mean = 100)
```
```
## [1] 100.52773 99.40241 100.39641 101.01629 99.41961 100.52202 98.09828
## [8] 99.52169 100.25677 99.92092
```
We don’t need to name the arguments because R will recognize that we intended to fill in the first and second arguments by their position in the function call. However, if we want to change the default for an argument coming later in the list, then we need to name it. For instance, if we wanted to keep the default `mean = 0` but change the standard deviation to 100 we would do it this way:
```
rnorm(10, sd = 100)
```
```
## [1] -68.254349 -17.636619 140.047575 7.570674 -68.309751 -2.378786
## [7] 117.356343 -104.772092 -40.163750 54.358941
```
Some functions give a list of options after an argument; this means the default value is the first option. The usage entry for the `power.t.test()` function looks like this:
```
power.t.test(n = NULL, delta = NULL, sd = 1, sig.level = 0.05,
power = NULL,
type = c("two.sample", "one.sample", "paired"),
alternative = c("two.sided", "one.sided"),
strict = FALSE, tol = .Machine$double.eps^0.25)
```
* What is the default value for `sd`? NULL 1 0\.05 two.sample
* What is the default value for `type`? NULL two.sample one.sample paired
* Which is equivalent to `power.t.test(100, 0.5)`? power.t.test(100, 0\.5, sig.level \= 1, sd \= 0\.05\) power.t.test() power.t.test(n \= 100\) power.t.test(delta \= 0\.5, n \= 100\)
### 1\.4\.6 Getting help
Start up help in a browser using the function `help.start()`.
If a function is in [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") or a loaded [package](https://psyteachr.github.io/glossary/p#package "A group of R functions."), you can use the `help("function_name")` function or the `?function_name` shortcut to access the help file. If the package isn’t loaded, specify the package name as the second argument to the help function.
```
# these methods are all equivalent ways of getting help
help("rnorm")
?rnorm
help("rnorm", package="stats")
```
When the package isn’t loaded or you aren’t sure what package the function is in, use the shortcut `??function_name`.
* What is the first argument to the `mean` function? trim na.rm mean x
* What package is `read_excel` in? readr readxl base stats
1\.5 Add\-on packages
---------------------
One of the great things about R is that it is **user extensible**: anyone can create a new add\-on software package that extends its functionality. There are currently thousands of add\-on packages that R users have created to solve many different kinds of problems, or just simply to have fun. There are packages for data visualisation, machine learning, neuroimaging, eyetracking, web scraping, and playing games such as Sudoku.
Add\-on packages are not distributed with base R, but have to be downloaded and installed from an archive, in the same way that you would, for instance, download and install a fitness app on your smartphone.
The main repository where packages reside is called CRAN, the Comprehensive R Archive Network. A package has to pass strict tests devised by the R core team to be allowed to be part of the CRAN archive. You can install from the CRAN archive through R using the `install.packages()` function.
There is an important distinction between **installing** a package and **loading** a package.
### 1\.5\.1 Installing a package
This is done using `install.packages()`. This is like installing an app on your phone: you only have to do it once and the app will remain installed until you remove it. For instance, if you want to use PokemonGo on your phone, you install it once from the App Store or Play Store, and you don’t have to re\-install it each time you want to use it. Once you launch the app, it will run in the background until you close it or restart your phone. Likewise, when you install a package, the package will be available (but not *loaded*) every time you open up R.
You may only be able to permanently install packages if you are using R on your own system; you may not be able to do this on public workstations if you lack the appropriate privileges.
Install the `ggExtra` package on your system. This package lets you create plots with marginal histograms.
```
install.packages("ggExtra")
```
If you don’t already have packages like ggplot2 and shiny installed, it will also install these **dependencies** for you. If you don’t get an error message at the end, the installation was successful.
### 1\.5\.2 Loading a package
This is done using `library(packagename)`. This is like **launching** an app on your phone: the functionality is only there where the app is launched and remains there until you close the app or restart. Likewise, when you run `library(packagename)` within a session, the functionality of the package referred to by `packagename` will be made available for your R session. The next time you start R, you will need to run the `library()` function again if you want to access its functionality.
You can load the functions in `ggExtra` for your current R session as follows:
```
library(ggExtra)
```
You might get some red text when you load a package, this is normal. It is usually warning you that this package has functions that have the same name as other packages you’ve already loaded.
You can use the convention `package::function()` to indicate in which add\-on package a function resides. For instance, if you see `readr::read_csv()`, that refers to the function `read_csv()` in the `readr` add\-on package.
Now you can run the function `ggExtra::runExample()`, which runs an interactive example of marginal plots using shiny.
```
ggExtra::runExample()
```
### 1\.5\.3 Install from GitHub
Many R packages are not yet on [CRAN](https://psyteachr.github.io/glossary/c#cran "The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up-to-date, versions of code and documentation for R.") because they are still in development. Increasingly, datasets and code for papers are available as packages you can download from github. You’ll need to install the devtools package to be able to install packages from github. Check if you have a package installed by trying to load it (e.g., if you don’t have devtools installed, `library("devtools")` will display an error message) or by searching for it in the packages tab in the lower right pane. All listed packages are installed; all checked packages are currently loaded.
Figure 1\.4: Check installed and loaded packages in the packages tab in the lower right pane.
```
# install devtools if you get
# Error in loadNamespace(name) : there is no package called ‘devtools’
# install.packages("devtools")
devtools::install_github("psyteachr/msc-data-skills")
```
After you install the dataskills package, load it using the `library()` function. You can then try out some of the functions below.
* `book()` opens a local copy of this book in your web browser.
* `app("plotdemo")` opens a shiny app that lets you see how simulated data would look in different plot styles
* `exercise(1)` creates and opens a file containing the exercises for this chapter
* `?disgust` shows you the documentation for the built\-in dataset `disgust`, which we will be using in future lessons
```
library(dataskills)
book()
app("plotdemo")
exercise(1)
?disgust
```
How many different ways can you find to discover what functions are available in the dataskills package?
1\.6 Organising a project
-------------------------
[Projects](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") in RStudio are a way to group all of the files you need for one project. Most projects include scripts, data files, and output files like the PDF version of the script or images.
Make a new directory where you will keep all of your materials for this class. If you’re using a lab computer, make sure you make this directory in your network drive so you can access it from other computers.
Choose **`New Project…`** under the **`File`** menu to create a new project called `01-intro` in this directory.
### 1\.6\.1 Structure
Here is what an R script looks like. Don’t worry about the details for now.
```
# load add-on packages
library(tidyverse)

# set object ----
n <- 100

# simulate data ----
data <- data.frame(
  id = 1:n,
  dv = c(rnorm(n/2, 0), rnorm(n/2, 1)),
  condition = rep(c("A", "B"), each = n/2)
)

# plot data ----
ggplot(data, aes(condition, dv)) +
  geom_violin(trim = FALSE) +
  geom_boxplot(width = 0.25,
               aes(fill = condition),
               show.legend = FALSE)

# save plot ----
ggsave("sim_data.png", width = 8, height = 6)
```
It’s best to follow this structure when developing your own scripts:
* load in any add\-on packages you need to use
* define any custom functions
* load or simulate the data you will be working with
* work with the data
* save anything you need to save
Often when you are working on a script, you will realize that you need to load another add\-on package. Don’t bury the call to `library(package_I_need)` way down in the script. Put it at the top, so the user has an overview of what packages are needed.
You can add comments to an R script with the hash symbol (`#`). The R interpreter will ignore characters from the hash to the end of the line.
```
## comments: any text from '#' on is ignored until end of line
22 / 7 # approximation to pi
```
```
## [1] 3.142857
```
If you add 4 or more dashes to the end of a comment, it acts like a header and creates an outline that you can see in the document outline (⇧⌘O).
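For example, the comments in a script might be turned into outline headers like this (the section names here are just placeholders):
```
# load packages ----
library(tidyverse)

# clean data ----
# (data cleaning code would go here)
```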
### 1\.6\.2 Reproducible reports with R Markdown
We will make reproducible reports following the principles of [literate programming](https://en.wikipedia.org/wiki/Literate_programming). The basic idea is to have the text of the report together in a single document along with the code needed to perform all analyses and generate the tables. The report is then “compiled” from the original format into some other, more portable format, such as HTML or PDF. This is different from traditional cutting and pasting approaches where, for instance, you create a graph in Microsoft Excel or a statistics program like SPSS and then paste it into Microsoft Word.
We will use [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") to create reproducible reports, which enables mixing of text and code. A reproducible script will contain sections of code in code blocks. A code block starts and ends with three backtick symbols in a row, with some information about the code between curly brackets, such as `{r chunk-name, echo=FALSE}` (this runs the code, but does not show the text of the code block in the compiled document). The text outside of code blocks is written in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links."), which is a way to specify formatting, such as headers, paragraphs, lists, bolding, and links.
Figure 1\.5: A reproducible script.
If you open up a new R Markdown file from a template, you will see an example document with several code blocks in it. To create an HTML or PDF report from an R Markdown (Rmd) document, you compile it. Compiling a document is called [knitting](https://psyteachr.github.io/glossary/k#knit "To create an HTML, PDF, or Word document from an R Markdown (Rmd) document") in RStudio. There is a button that looks like a ball of yarn with needles through it that you click on to compile your file into a report.
Create a new R Markdown file from the **`File > New File > R Markdown…`** menu. Change the title and author, then click the knit button to create an html file.
### 1\.6\.3 Working Directory
Where should you put all of your files? When developing an analysis, you usually want to have all of your scripts and data files in one subtree of your computer’s directory structure. Usually there is a single [working directory](https://psyteachr.github.io/glossary/w#working-directory "The filepath where R is currently reading and writing files.") where your data and scripts are stored.
Your script should only reference files in three locations, using the appropriate format.
| Where | Example |
| --- | --- |
| on the web | “[https://psyteachr.github.io/msc\-data\-skills/data/disgust\_scores.csv](https://psyteachr.github.io/msc-data-skills/data/disgust_scores.csv)” |
| in the working directory | “disgust\_scores.csv” |
| in a subdirectory | “data/disgust\_scores.csv” |
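As a rough sketch, reading the same file from each of these locations would look like this (assuming the file actually exists in each location and that the readr/tidyverse package is loaded):
```
# on the web
dat <- read_csv("https://psyteachr.github.io/msc-data-skills/data/disgust_scores.csv")

# in the working directory
dat <- read_csv("disgust_scores.csv")

# in a subdirectory
dat <- read_csv("data/disgust_scores.csv")
```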
Never set or change your working directory in a script.
If you are working with an R Markdown file, it will automatically use the same directory the .Rmd file is in as the working directory.
If you are working with R scripts, store your main script file in the top\-level directory and manually set your working directory to that location. You will have to reset the working directory each time you open RStudio, unless you create a [project](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") and access the script from the project.
For instance, if you are on a Windows machine and your data and scripts are in the directory `C:\Carla's_files\thesis2\my_thesis\new_analysis`, you can set your working directory in one of two ways: (1\) by going to the `Session` pull\-down menu in RStudio and choosing `Set Working Directory`, or (2\) by typing `setwd("C:/Carla's_files/thesis2/my_thesis/new_analysis")` in the console window (note the forward slashes inside the R string; single backslashes would cause an error).
It’s tempting to make your life simple by putting the `setwd()` command in your script. Don’t do this! Others will not have the same directory tree as you (and when your laptop dies and you get a new one, neither will you).
When manually setting the working directory, always do so by using the **`Session > Set Working Directory`** pull\-down option or by typing `setwd()` in the console.
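You can check which directory R is currently using, and which files it can see there, from the console:
```
getwd()      # print the current working directory
list.files() # list the files R can see in that directory
```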
If your script needs a file in a subdirectory of `new_analysis`, say, `data/questionnaire.csv`, load it in using a [relative path](https://psyteachr.github.io/glossary/r#relative-path "The location of a file in relation to the working directory.") so that it is accessible if you move the folder `new_analysis` to another location or computer:
```
dat <- read_csv("data/questionnaire.csv") # correct
```
Do not load it in using an [absolute path](https://psyteachr.github.io/glossary/a#absolute-path "A file path that starts with / and is not appended to the working directory"):
```
dat <- read_csv("C:/Carla's_files/thesis22/my_thesis/new_analysis/data/questionnaire.csv") # wrong
```
Also note the convention of using forward slashes, unlike the Windows\-specific convention of using backward slashes. This is to make references to files platform independent.
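If you prefer not to type the separators by hand, base R’s `file.path()` builds platform\-independent paths for you (a minimal example):
```
file.path("data", "questionnaire.csv")
```
```
## [1] "data/questionnaire.csv"
```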
1\.7 Glossary
-------------
Each chapter ends with a glossary table defining the jargon introduced in this chapter. The links below take you to the [glossary book](https://psyteachr.github.io/glossary), which you can also download for offline use with `devtools::install_github("psyteachr/glossary")` and access the glossary offline with `glossary::book()`.
| term | definition |
| --- | --- |
| [absolute path](https://psyteachr.github.io/glossary/a#absolute.path) | A file path that starts with / and is not appended to the working directory |
| [argument](https://psyteachr.github.io/glossary/a#argument) | A variable that provides input to a function. |
| [assignment operator](https://psyteachr.github.io/glossary/a#assignment.operator) | The symbol \<\-, which functions like \= and assigns the value on the right to the object on the left |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [console](https://psyteachr.github.io/glossary/c#console) | The pane in RStudio where you can type in commands and view output messages. |
| [cran](https://psyteachr.github.io/glossary/c#cran) | The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up\-to\-date, versions of code and documentation for R. |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [ide](https://psyteachr.github.io/glossary/i#ide) | Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R. |
| [knit](https://psyteachr.github.io/glossary/k#knit) | To create an HTML, PDF, or Word document from an R Markdown (Rmd) document |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [object](https://psyteachr.github.io/glossary/o#object) | A word that identifies and stores the value of some data for later use. |
| [package](https://psyteachr.github.io/glossary/p#package) | A group of R functions. |
| [panes](https://psyteachr.github.io/glossary/p#panes) | RStudio is arranged with four window “panes.” |
| [project](https://psyteachr.github.io/glossary/p#project) | A way to organise related files in RStudio |
| [r markdown](https://psyteachr.github.io/glossary/r#r.markdown) | The R\-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code. |
| [relative path](https://psyteachr.github.io/glossary/r#relative.path) | The location of a file in relation to the working directory. |
| [reproducible research](https://psyteachr.github.io/glossary/r#reproducible.research) | Research that documents all of the steps between raw data and results in a way that can be verified. |
| [script](https://psyteachr.github.io/glossary/s#script) | A plain\-text file that contains commands in a coding language, such as R. |
| [scripts](https://psyteachr.github.io/glossary/s#scripts) | NA |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
| [string](https://psyteachr.github.io/glossary/s#string) | A piece of text inside of quotes. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [whitespace](https://psyteachr.github.io/glossary/w#whitespace) | Spaces, tabs and line breaks |
| [working directory](https://psyteachr.github.io/glossary/w#working.directory) | The filepath where R is currently reading and writing files. |
1\.8 Exercises
--------------
Download the first set of [exercises](exercises/01_intro_exercise.Rmd) and put it in the project directory you created earlier for today’s exercises. See the [answers](exercises/01_intro_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(1)
# run this to access the answers
dataskills::exercise(1, answers = TRUE)
```
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/intro.html |
Chapter 1 Getting Started
=========================
1\.1 Learning Objectives
------------------------
1. Understand the components of the [RStudio IDE](intro.html#rstudio_ide) [(video)](https://youtu.be/CbA6ZVlJE78)
2. Type commands into the [console](intro.html#console) [(video)](https://youtu.be/wbI4c_7y0kE)
3. Understand [function syntax](intro.html#function_syx) [(video)](https://youtu.be/X5P038N5Q8I)
4. Install a [package](intro.html#install-package) [(video)](https://youtu.be/u_pvHnqkVCE)
5. Organise a [project](intro.html#projects) [(video)](https://youtu.be/y-KiPueC9xw)
6. Create and compile an [Rmarkdown document](intro.html#rmarkdown) [(video)](https://youtu.be/EqJiAlJAl8Y)
1\.2 Resources
--------------
* [Chapter 1: Introduction](http://r4ds.had.co.nz/introduction.html) in *R for Data Science*
* [RStudio IDE Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/rstudio-ide.pdf)
* [Introduction to R Markdown](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/rmarkdown-2.0.pdf)
* [R Markdown Reference](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)
* [RStudio Cloud](https://rstudio.cloud/)
1\.3 What is R?
---------------
R is a programming environment for data processing and statistical analysis. We use R in Psychology at the University of Glasgow to promote [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This refers to being able to document and reproduce all of the steps between raw data and results. R allows you to write [scripts](https://psyteachr.github.io/glossary/s#script "A plain-text file that contains commands in a coding language, such as R.") that combine data files, clean data, and run analyses. There are many other ways to do this, including writing SPSS syntax files, but we find R to be a useful tool that is free, open source, and commonly used by research psychologists.
See Appendix [A](installingr.html#installingr) for more information on how to install R and associated programs.
### 1\.3\.1 The Base R Console
If you open up the application called R, you will see an “R Console” window that looks something like this.
Figure 1\.1: The R Console window.
You can close R and never open it again. We’ll be working entirely in RStudio in this class.
ALWAYS REMEMBER: Launch R through the RStudio IDE.
### 1\.3\.2 RStudio
[RStudio](http://www.rstudio.com) is an Integrated Development Environment ([IDE](https://psyteachr.github.io/glossary/i#ide "Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R.")). This is a program that serves as a text editor, file manager, and provides many functions to help you read and write R code.
Figure 1\.2: The RStudio IDE
RStudio is arranged with four window [panes](https://psyteachr.github.io/glossary/p#panes "RStudio is arranged with four window “panes.”"). By default, the upper left pane is the **source pane**, where you view and edit source code from files. The bottom left pane is usually the **console pane**, where you can type in commands and view output messages. The right panes have several different tabs that show you information about your code. You can change the location of panes and what tabs are shown under **`Preferences > Pane Layout`**.
### 1\.3\.3 Configure RStudio
In this class, you will be learning how to do [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This involves writing scripts that completely and transparently perform some analysis from start to finish in a way that yields the same result for different people using the same software on different computers. Transparency is a key value of science, as embodied in the “trust but verify” motto.
When you do things reproducibly, others can understand and check your work. This benefits science, but there is a selfish reason, too: the most important person who will benefit from a reproducible script is your future self. When you return to an analysis after two weeks of vacation, you will thank your earlier self for doing things in a transparent, reproducible way, as you can easily pick up right where you left off.
There are two tweaks that you should do to your RStudio installation to maximize reproducibility. Go to **`Global Options...`** under the **`Tools`** menu (⌘,), and uncheck the box that says **`Restore .RData into workspace at startup`**. If you keep things around in your workspace, things will get messy, and unexpected things will happen. You should always start with a clear workspace. This also means that you never want to save your workspace when you exit, so set this to **`Never`**. The only thing you want to save are your scripts.
Figure 1\.3: Alter these settings for increased reproducibility.
Your settings should have:
* Restore .RData into workspace at startup: Not Checked
* Save workspace to .RData on exit: Never
1\.4 Getting Started
--------------------
### 1\.4\.1 Console commands
We are first going to learn about how to interact with the [console](https://psyteachr.github.io/glossary/c#console "The pane in RStudio where you can type in commands and view output messages."). In general, you will be developing R [script](https://psyteachr.github.io/glossary/s#scripts "NA") or [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") files, rather than working directly in the console window. However, you can consider the console a kind of “sandbox” where you can try out lines of code and adapt them until you get them to do what you want. Then you can copy them back into the script editor.
Mostly, however, you will be typing into the script editor window (either into an R script or an R Markdown file) and then sending the commands to the console by placing the cursor on the line and holding down the Ctrl key while you press Enter. The Ctrl\+Enter key sequence sends the command in the script to the console.
One simple way to learn about the R console is to use it as a calculator. Enter the lines of code below and see if your results match. Be prepared to make lots of typos (at first).
```
1 + 1
```
```
## [1] 2
```
The R console remembers a history of the commands you typed in the past. Use the up and down arrow keys on your keyboard to scroll backwards and forwards through your history. It’s a lot faster than re\-typing.
```
1 + 1 + 3
```
```
## [1] 5
```
You can break up mathematical expressions over multiple lines; R waits for a complete expression before processing it.
```
## here comes a long expression
## let's break it over multiple lines
1 + 2 + 3 + 4 + 5 + 6 +
7 + 8 + 9 +
10
```
```
## [1] 55
```
Text inside quotes is called a [string](https://psyteachr.github.io/glossary/s#string "A piece of text inside of quotes.").
```
"Good afternoon"
```
```
## [1] "Good afternoon"
```
You can break up text over multiple lines; R waits for a close quote before processing it. If you want to include a double quote inside this quoted string, [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it with a backslash.
```
africa <- "I hear the drums echoing tonight
But she hears only whispers of some quiet conversation
She's coming in, 12:30 flight
The moonlit wings reflect the stars that guide me towards salvation
I stopped an old man along the way
Hoping to find some old forgotten words or ancient melodies
He turned to me as if to say, \"Hurry boy, it's waiting there for you\"
- Toto"
cat(africa) # cat() prints the string
```
```
## I hear the drums echoing tonight
## But she hears only whispers of some quiet conversation
## She's coming in, 12:30 flight
## The moonlit wings reflect the stars that guide me towards salvation
## I stopped an old man along the way
## Hoping to find some old forgotten words or ancient melodies
## He turned to me as if to say, "Hurry boy, it's waiting there for you"
##
## - Toto
```
### 1\.4\.2 Objects
Often you want to store the result of some computation for later use. You can store it in an [object](https://psyteachr.github.io/glossary/o#object "A word that identifies and stores the value of some data for later use.") (also sometimes called a [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.")). An object in R:
* contains only letters, numbers, full stops, and underscores
* starts with a letter or a full stop and a letter
* distinguishes uppercase and lowercase letters (`rickastley` is not the same as `RickAstley`)
The following are valid and different objects:
* songdata
* SongData
* song\_data
* song.data
* .song.data
* never\_gonna\_give\_you\_up\_never\_gonna\_let\_you\_down
The following are not valid objects:
* \_song\_data
* 1song
* .1song
* song data
* song\-data
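As a quick illustration of case sensitivity (the object names and values below are made up for the example):
```
rickastley <- "never gonna give you up"
RickAstley <- "never gonna let you down"
rickastley == RickAstley # two different objects, so the values differ
```
```
## [1] FALSE
```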
Use the [assignment operator](https://psyteachr.github.io/glossary/a#assignment-operator "The symbol <-, which functions like = and assigns the value on the right to the object on the left") `<-` to assign the value on the right to the object named on the left.
```
## use the assignment operator '<-'
## R stores the number in the object
x <- 5
```
Now that we have set `x` to a value, we can do something with it:
```
x * 2
```
```
## [1] 10
```
```
## R evaluates the expression and stores the result in the object boring_calculation
boring_calculation <- 2 + 2
```
Note that it doesn’t print the result back at you when it’s stored. To view the result, just type the object name on a blank line.
```
boring_calculation
```
```
## [1] 4
```
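A small optional shortcut: wrapping an assignment in parentheses both stores the value *and* prints it in one step.
```
(boring_calculation <- 2 + 2)
```
```
## [1] 4
```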
Once an object is assigned a value, its value doesn’t change unless you reassign the object, even if the objects you used to calculate it change. Predict what the code below does and test yourself:
```
this_year <- 2019
my_birth_year <- 1976
my_age <- this_year - my_birth_year
this_year <- 2020
```
After all the code above is run:
* `this_year` \= ? (43, 44, 1976, 2019, or 2020)
* `my_birth_year` \= ? (43, 44, 1976, 2019, or 2020)
* `my_age` \= ? (43, 44, 1976, 2019, or 2020)
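Once you have made your predictions, you can check them by printing each object in the console:
```
this_year
my_birth_year
my_age
```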
### 1\.4\.3 The environment
Anytime you assign something to a new object, R creates a new entry in the [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs"). Objects in the global environment exist until you end your session; then they disappear forever (unless you save them).
Look at the **Environment** tab in the upper right pane. It lists all of the objects you have created. Click the broom icon to clear all of the objects and start fresh. You can also use the following functions in the console to view all objects, remove one object, or remove all objects.
```
ls() # print the objects in the global environment
rm("x") # remove the object named x from the global environment
rm(list = ls()) # clear out the global environment
```
In the upper right corner of the Environment tab, change **`List`** to **`Grid`**. Now you can see the type, length, and size of your objects, and reorder the list by any of these attributes.
### 1\.4\.4 Whitespace
R mostly ignores [whitespace](https://psyteachr.github.io/glossary/w#whitespace "Spaces, tabs and line breaks"): spaces, tabs, and line breaks. This means that you can use whitespace to help you organise your code.
```
# a and b are identical
a <- list(ctl = "Control Condition", exp1 = "Experimental Condition 1", exp2 = "Experimental Condition 2")

# but b is much easier to read
b <- list(ctl = "Control Condition",
          exp1 = "Experimental Condition 1",
          exp2 = "Experimental Condition 2")
```
When you see `>` at the beginning of a line, that means R is waiting for you to start a new command. However, if you see a `+` instead of `>` at the start of the line, that means R is waiting for you to finish a command you started on a previous line. If you want to cancel whatever command you started, just press the Esc key in the console window and you’ll get back to the `>` command prompt.
```
# R waits until next line for evaluation
(3 + 2) *
5
```
```
## [1] 25
```
It is often useful to break up long functions onto several lines.
```
cat("3, 6, 9, the goose drank wine",
"The monkey chewed tobacco on the streetcar line",
"The line broke, the monkey got choked",
"And they all went to heaven in a little rowboat",
sep = " \n")
```
```
## 3, 6, 9, the goose drank wine
## The monkey chewed tobacco on the streetcar line
## The line broke, the monkey got choked
## And they all went to heaven in a little rowboat
```
### 1\.4\.5 Function syntax
A lot of what you do in R involves calling a [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") and storing the results. A function is a named section of code that can be reused.
For example, `sd` is a function that returns the [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of the [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") of numbers that you provide as the input [argument](https://psyteachr.github.io/glossary/a#argument "A variable that provides input to a function."). Functions are set up like this:
`function_name(argument1, argument2 = "value")`.
The arguments in parentheses can be named (like, `argument1 = 10`) or you can skip the names if you put them in the exact same order that they’re defined in the function. You can check this by typing `?sd` (or whatever function name you’re looking up) into the console and the Help pane will show you the default order under **Usage**. You can also skip arguments that have a default value specified.
Most functions return a value, but may also produce side effects like printing to the console.
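For instance, both of these calls to `sd()` are equivalent (the numbers here are made up):
```
scores <- c(4, 7, 8, 10, 12)
sd(scores)     # argument matched by position
sd(x = scores) # argument matched by name, same result
```
```
## [1] 3.03315
## [1] 3.03315
```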
To illustrate, the function `rnorm()` generates random numbers from the standard [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable."). The help page for `rnorm()` (accessed by typing `?rnorm` in the console) shows that it has the syntax
`rnorm(n, mean = 0, sd = 1)`
where `n` is the number of randomly generated numbers you want, `mean` is the mean of the distribution, and `sd` is the standard deviation. The default mean is 0, and the default standard deviation is 1\. There is no default for `n`, which means you’ll get an error if you don’t specify it:
```
rnorm()
```
```
## Error in rnorm(): argument "n" is missing, with no default
```
If you want 10 random numbers from a normal distribution with a mean of 0 and a standard deviation of 1, you can just use the defaults.
```
rnorm(10)
```
```
## [1] -0.04234663 -2.00393149 0.83611187 -1.46404127 1.31714428 0.42608581
## [7] -0.46673798 -0.01670509 1.64072295 0.85876439
```
If you want 10 numbers from a normal distribution with a mean of 100:
```
rnorm(10, 100)
```
```
## [1] 101.34917 99.86059 100.36287 99.65575 100.66818 100.04771 99.76782
## [8] 102.57691 100.05575 99.42490
```
This would be an equivalent but less efficient way of calling the function:
```
rnorm(n = 10, mean = 100)
```
```
## [1] 100.52773 99.40241 100.39641 101.01629 99.41961 100.52202 98.09828
## [8] 99.52169 100.25677 99.92092
```
We don’t need to name the arguments because R will recognize that we intended to fill in the first and second arguments by their position in the function call. However, if we want to change the default for an argument coming later in the list, then we need to name it. For instance, if we wanted to keep the default `mean = 0` but change the standard deviation to 100 we would do it this way:
```
rnorm(10, sd = 100)
```
```
## [1] -68.254349 -17.636619 140.047575 7.570674 -68.309751 -2.378786
## [7] 117.356343 -104.772092 -40.163750 54.358941
```
Some functions give a list of options after an argument; this means the default value is the first option. The usage entry for the `power.t.test()` function looks like this:
```
power.t.test(n = NULL, delta = NULL, sd = 1, sig.level = 0.05,
power = NULL,
type = c("two.sample", "one.sample", "paired"),
alternative = c("two.sided", "one.sided"),
strict = FALSE, tol = .Machine$double.eps^0.25)
```
* What is the default value for `sd`? (NULL, 1, 0\.05, or two.sample)
* What is the default value for `type`? (NULL, two.sample, one.sample, or paired)
* Which is equivalent to `power.t.test(100, 0.5)`? (`power.t.test(100, 0.5, sig.level = 1, sd = 0.05)`, `power.t.test()`, `power.t.test(n = 100)`, or `power.t.test(delta = 0.5, n = 100)`)
### 1\.4\.6 Getting help
Start up help in a browser using the function `help.start()`.
If a function is in [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") or a loaded [package](https://psyteachr.github.io/glossary/p#package "A group of R functions."), you can use the `help("function_name")` function or the `?function_name` shortcut to access the help file. If the package isn’t loaded, specify the package name as the second argument to the help function.
```
# these methods are all equivalent ways of getting help
help("rnorm")
?rnorm
help("rnorm", package="stats")
```
When the package isn’t loaded or you aren’t sure what package the function is in, use the shortcut `??function_name`.
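For example, if you cannot remember which package `read_excel` comes from, you could run:
```
??read_excel
```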
* What is the first argument to the `mean` function? (trim, na.rm, mean, or x)
* What package is `read_excel` in? (readr, readxl, base, or stats)
1\.5 Add\-on packages
---------------------
One of the great things about R is that it is **user extensible**: anyone can create a new add\-on software package that extends its functionality. There are currently thousands of add\-on packages that R users have created to solve many different kinds of problems, or just simply to have fun. There are packages for data visualisation, machine learning, neuroimaging, eyetracking, web scraping, and playing games such as Sudoku.
Add\-on packages are not distributed with base R, but have to be downloaded and installed from an archive, in the same way that you would, for instance, download and install a fitness app on your smartphone.
The main repository where packages reside is called CRAN, the Comprehensive R Archive Network. A package has to pass strict tests devised by the R core team to be allowed to be part of the CRAN archive. You can install from the CRAN archive through R using the `install.packages()` function.
There is an important distinction between **installing** a package and **loading** a package.
### 1\.5\.1 Installing a package
This is done using `install.packages()`. This is like installing an app on your phone: you only have to do it once and the app will remain installed until you remove it. For instance, if you want to use PokemonGo on your phone, you install it once from the App Store or Play Store, and you don’t have to re\-install it each time you want to use it. Once you launch the app, it will run in the background until you close it or restart your phone. Likewise, when you install a package, the package will be available (but not *loaded*) every time you open up R.
You may only be able to permanently install packages if you are using R on your own system; you may not be able to do this on public workstations if you lack the appropriate privileges.
Install the `ggExtra` package on your system. This package lets you create plots with marginal histograms.
```
install.packages("ggExtra")
```
If you don’t already have packages like ggplot2 and shiny installed, it will also install these **dependencies** for you. If you don’t get an error message at the end, the installation was successful.
### 1\.5\.2 Loading a package
This is done using `library(packagename)`. This is like **launching** an app on your phone: the functionality is only there once the app is launched and remains available until you close the app or restart your phone. Likewise, when you run `library(packagename)` within a session, the functionality of the package referred to by `packagename` will be made available for your R session. The next time you start R, you will need to run the `library()` function again if you want to access its functionality.
You can load the functions in `ggExtra` for your current R session as follows:
```
library(ggExtra)
```
You might get some red text when you load a package; this is normal. It is usually warning you that this package has functions that have the same name as other packages you’ve already loaded.
You can use the convention `package::function()` to indicate in which add\-on package a function resides. For instance, if you see `readr::read_csv()`, that refers to the function `read_csv()` in the `readr` add\-on package.
Now you can run the function `ggExtra::runExample()`, which runs an interactive example of marginal plots using shiny.
```
ggExtra::runExample()
```
### 1\.5\.3 Install from GitHub
Many R packages are not yet on [CRAN](https://psyteachr.github.io/glossary/c#cran "The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up-to-date, versions of code and documentation for R.") because they are still in development. Increasingly, datasets and code for papers are available as packages you can download from GitHub. You’ll need to install the devtools package to be able to install packages from GitHub. Check if you have a package installed by trying to load it (e.g., if you don’t have devtools installed, `library("devtools")` will display an error message) or by searching for it in the packages tab in the lower right pane. All listed packages are installed; all checked packages are currently loaded.
Figure 1\.4: Check installed and loaded packages in the packages tab in the lower right pane.
```
# install devtools if you get
# Error in loadNamespace(name) : there is no package called ‘devtools’
# install.packages("devtools")
devtools::install_github("psyteachr/msc-data-skills")
```
After you install the dataskills package, load it using the `library()` function. You can then try out some of the functions below.
* `book()` opens a local copy of this book in your web browser.
* `app("plotdemo")` opens a shiny app that lets you see how simulated data would look in different plot styles
* `exercise(1)` creates and opens a file containing the exercises for this chapter
* `?disgust` shows you the documentation for the built\-in dataset `disgust`, which we will be using in future lessons
```
library(dataskills)
book()
app("plotdemo")
exercise(1)
?disgust
```
How many different ways can you find to discover what functions are available in the dataskills package?
1\.6 Organising a project
-------------------------
[Projects](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") in RStudio are a way to group all of the files you need for one project. Most projects include scripts, data files, and output files like the PDF version of the script or images.
Make a new directory where you will keep all of your materials for this class. If you’re using a lab computer, make sure you make this directory in your network drive so you can access it from other computers.
Choose **`New Project…`** under the **`File`** menu to create a new project called `01-intro` in this directory.
### 1\.6\.1 Structure
Here is what an R script looks like. Don’t worry about the details for now.
```
# load add-on packages
library(tidyverse)

# set object ----
n <- 100

# simulate data ----
data <- data.frame(
  id = 1:n,
  dv = c(rnorm(n/2, 0), rnorm(n/2, 1)),
  condition = rep(c("A", "B"), each = n/2)
)

# plot data ----
ggplot(data, aes(condition, dv)) +
  geom_violin(trim = FALSE) +
  geom_boxplot(width = 0.25,
               aes(fill = condition),
               show.legend = FALSE)

# save plot ----
ggsave("sim_data.png", width = 8, height = 6)
```
It’s best to follow this structure when developing your own scripts:
* load in any add\-on packages you need to use
* define any custom functions
* load or simulate the data you will be working with
* work with the data
* save anything you need to save
Often when you are working on a script, you will realize that you need to load another add\-on package. Don’t bury the call to `library(package_I_need)` way down in the script. Put it at the top, so the user has an overview of what packages are needed.
You can add comments to an R script with the hash symbol (`#`). The R interpreter will ignore characters from the hash to the end of the line.
```
## comments: any text from '#' on is ignored until end of line
22 / 7 # approximation to pi
```
```
## [1] 3.142857
```
If you add 4 or more dashes to the end of a comment, it acts like a header and creates an outline that you can see in the document outline (⇧⌘O).
### 1\.6\.2 Reproducible reports with R Markdown
We will make reproducible reports following the principles of [literate programming](https://en.wikipedia.org/wiki/Literate_programming). The basic idea is to have the text of the report together in a single document along with the code needed to perform all analyses and generate the tables. The report is then “compiled” from the original format into some other, more portable format, such as HTML or PDF. This is different from traditional cutting and pasting approaches where, for instance, you create a graph in Microsoft Excel or a statistics program like SPSS and then paste it into Microsoft Word.
We will use [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") to create reproducible reports, which enables mixing of text and code. A reproducible script will contain sections of code in code blocks. A code block starts and ends with three backtick symbols in a row, with some information about the code between curly brackets, such as `{r chunk-name, echo=FALSE}` (this runs the code, but does not show the text of the code block in the compiled document). The text outside of code blocks is written in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links."), which is a way to specify formatting, such as headers, paragraphs, lists, bolding, and links.
Figure 1\.5: A reproducible script.
If you open up a new R Markdown file from a template, you will see an example document with several code blocks in it. To create an HTML or PDF report from an R Markdown (Rmd) document, you compile it. Compiling a document is called [knitting](https://psyteachr.github.io/glossary/k#knit "To create an HTML, PDF, or Word document from an R Markdown (Rmd) document") in RStudio. There is a button that looks like a ball of yarn with needles through it that you click on to compile your file into a report.
Create a new R Markdown file from the **`File > New File > R Markdown…`** menu. Change the title and author, then click the knit button to create an html file.
### 1\.6\.3 Working Directory
Where should you put all of your files? When developing an analysis, you usually want to have all of your scripts and data files in one subtree of your computer’s directory structure. Usually there is a single [working directory](https://psyteachr.github.io/glossary/w#working-directory "The filepath where R is currently reading and writing files.") where your data and scripts are stored.
Your script should only reference files in three locations, using the appropriate format.
| Where | Example |
| --- | --- |
| on the web | “[https://psyteachr.github.io/msc\-data\-skills/data/disgust\_scores.csv](https://psyteachr.github.io/msc-data-skills/data/disgust_scores.csv)” |
| in the working directory | “disgust\_scores.csv” |
| in a subdirectory | “data/disgust\_scores.csv” |
Never set or change your working directory in a script.
If you are working with an R Markdown file, it will automatically use the same directory the .Rmd file is in as the working directory.
If you are working with R scripts, store your main script file in the top\-level directory and manually set your working directory to that location. You will have to reset the working directory each time you open RStudio, unless you create a [project](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") and access the script from the project.
For instance, if you are on a Windows machine and your data and scripts are in the directory `C:\Carla's_files\thesis2\my_thesis\new_analysis`, you can set your working directory in one of two ways: (1\) by going to the `Session` pull\-down menu in RStudio and choosing `Set Working Directory`, or (2\) by typing `setwd("C:/Carla's_files/thesis2/my_thesis/new_analysis")` in the console window (note the forward slashes inside the R string; single backslashes would cause an error).
It’s tempting to make your life simple by putting the `setwd()` command in your script. Don’t do this! Others will not have the same directory tree as you (and when your laptop dies and you get a new one, neither will you).
When manually setting the working directory, always do so by using the **`Session > Set Working Directory`** pull\-down option or by typing `setwd()` in the console.
If your script needs a file in a subdirectory of `new_analysis`, say, `data/questionnaire.csv`, load it in using a [relative path](https://psyteachr.github.io/glossary/r#relative-path "The location of a file in relation to the working directory.") so that it is accessible if you move the folder `new_analysis` to another location or computer:
```
dat <- read_csv("data/questionnaire.csv") # correct
```
Do not load it in using an [absolute path](https://psyteachr.github.io/glossary/a#absolute-path "A file path that starts with / and is not appended to the working directory"):
```
dat <- read_csv("C:/Carla's_files/thesis22/my_thesis/new_analysis/data/questionnaire.csv") # wrong
```
Also note the convention of using forward slashes, unlike the Windows\-specific convention of using backward slashes. This is to make references to files platform independent.
1\.7 Glossary
-------------
Each chapter ends with a glossary table defining the jargon introduced in this chapter. The links below take you to the [glossary book](https://psyteachr.github.io/glossary), which you can also download for offline use with `devtools::install_github("psyteachr/glossary")` and access the glossary offline with `glossary::book()`.
| term | definition |
| --- | --- |
| [absolute path](https://psyteachr.github.io/glossary/a#absolute.path) | A file path that starts with / and is not appended to the working directory |
| [argument](https://psyteachr.github.io/glossary/a#argument) | A variable that provides input to a function. |
| [assignment operator](https://psyteachr.github.io/glossary/a#assignment.operator) | The symbol \<\-, which functions like \= and assigns the value on the right to the object on the left |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [console](https://psyteachr.github.io/glossary/c#console) | The pane in RStudio where you can type in commands and view output messages. |
| [cran](https://psyteachr.github.io/glossary/c#cran) | The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up\-to\-date, versions of code and documentation for R. |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [ide](https://psyteachr.github.io/glossary/i#ide) | Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R. |
| [knit](https://psyteachr.github.io/glossary/k#knit) | To create an HTML, PDF, or Word document from an R Markdown (Rmd) document |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [object](https://psyteachr.github.io/glossary/o#object) | A word that identifies and stores the value of some data for later use. |
| [package](https://psyteachr.github.io/glossary/p#package) | A group of R functions. |
| [panes](https://psyteachr.github.io/glossary/p#panes) | RStudio is arranged with four window “panes.” |
| [project](https://psyteachr.github.io/glossary/p#project) | A way to organise related files in RStudio |
| [r markdown](https://psyteachr.github.io/glossary/r#r.markdown) | The R\-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code. |
| [relative path](https://psyteachr.github.io/glossary/r#relative.path) | The location of a file in relation to the working directory. |
| [reproducible research](https://psyteachr.github.io/glossary/r#reproducible.research) | Research that documents all of the steps between raw data and results in a way that can be verified. |
| [script](https://psyteachr.github.io/glossary/s#script) | A plain\-text file that contains commands in a coding language, such as R. |
| [scripts](https://psyteachr.github.io/glossary/s#scripts) | NA |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
| [string](https://psyteachr.github.io/glossary/s#string) | A piece of text inside of quotes. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [whitespace](https://psyteachr.github.io/glossary/w#whitespace) | Spaces, tabs and line breaks |
| [working directory](https://psyteachr.github.io/glossary/w#working.directory) | The filepath where R is currently reading and writing files. |
1\.8 Exercises
--------------
Download the first set of [exercises](exercises/01_intro_exercise.Rmd) and put it in the project directory you created earlier for today’s exercises. See the [answers](exercises/01_intro_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(1)
# run this to access the answers
dataskills::exercise(1, answers = TRUE)
```
1\.1 Learning Objectives
------------------------
1. Understand the components of the [RStudio IDE](intro.html#rstudio_ide) [(video)](https://youtu.be/CbA6ZVlJE78)
2. Type commands into the [console](intro.html#console) [(video)](https://youtu.be/wbI4c_7y0kE)
3. Understand [function syntax](intro.html#function_syx) [(video)](https://youtu.be/X5P038N5Q8I)
4. Install a [package](intro.html#install-package) [(video)](https://youtu.be/u_pvHnqkVCE)
5. Organise a [project](intro.html#projects) [(video)](https://youtu.be/y-KiPueC9xw)
6. Create and compile an [Rmarkdown document](intro.html#rmarkdown) [(video)](https://youtu.be/EqJiAlJAl8Y)
1\.2 Resources
--------------
* [Chapter 1: Introduction](http://r4ds.had.co.nz/introduction.html) in *R for Data Science*
* [RStudio IDE Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/rstudio-ide.pdf)
* [Introduction to R Markdown](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/rmarkdown-2.0.pdf)
* [R Markdown Reference](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)
* [RStudio Cloud](https://rstudio.cloud/)
1\.3 What is R?
---------------
R is a programming environment for data processing and statistical analysis. We use R in Psychology at the University of Glasgow to promote [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This refers to being able to document and reproduce all of the steps between raw data and results. R allows you to write [scripts](https://psyteachr.github.io/glossary/s#script "A plain-text file that contains commands in a coding language, such as R.") that combine data files, clean data, and run analyses. There are many other ways to do this, including writing SPSS syntax files, but we find R to be a useful tool that is free, open source, and commonly used by research psychologists.
See Appendix [A](installingr.html#installingr) for more information on how to install R and associated programs.
### 1\.3\.1 The Base R Console
If you open up the application called R, you will see an “R Console” window that looks something like this.
Figure 1\.1: The R Console window.
You can close R and never open it again. We’ll be working entirely in RStudio in this class.
ALWAYS REMEMBER: Launch R through the RStudio IDE.
### 1\.3\.2 RStudio
[RStudio](http://www.rstudio.com) is an Integrated Development Environment ([IDE](https://psyteachr.github.io/glossary/i#ide "Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R.")). This is a program that serves as a text editor, file manager, and provides many functions to help you read and write R code.
Figure 1\.2: The RStudio IDE
RStudio is arranged with four window [panes](https://psyteachr.github.io/glossary/p#panes "RStudio is arranged with four window “panes.”"). By default, the upper left pane is the **source pane**, where you view and edit source code from files. The bottom left pane is usually the **console pane**, where you can type in commands and view output messages. The right panes have several different tabs that show you information about your code. You can change the location of panes and what tabs are shown under **`Preferences > Pane Layout`**.
### 1\.3\.3 Configure RStudio
In this class, you will be learning how to do [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This involves writing scripts that completely and transparently perform some analysis from start to finish in a way that yields the same result for different people using the same software on different computers. Transparency is a key value of science, as embodied in the “trust but verify” motto.
When you do things reproducibly, others can understand and check your work. This benefits science, but there is a selfish reason, too: the most important person who will benefit from a reproducible script is your future self. When you return to an analysis after two weeks of vacation, you will thank your earlier self for doing things in a transparent, reproducible way, as you can easily pick up right where you left off.
There are two tweaks that you should do to your RStudio installation to maximize reproducibility. Go to **`Global Options...`** under the **`Tools`** menu (⌘,), and uncheck the box that says **`Restore .RData into workspace at startup`**. If you keep things around in your workspace, things will get messy, and unexpected things will happen. You should always start with a clear workspace. This also means that you never want to save your workspace when you exit, so set this to **`Never`**. The only thing you want to save are your scripts.
Figure 1\.3: Alter these settings for increased reproducibility.
Your settings should have:
* Restore .RData into workspace at startup: **Not Checked**
* Save workspace to .RData on exit: **Never**
1\.4 Getting Started
--------------------
### 1\.4\.1 Console commands
We are first going to learn about how to interact with the [console](https://psyteachr.github.io/glossary/c#console "The pane in RStudio where you can type in commands and view output messages."). In general, you will be developing R [script](https://psyteachr.github.io/glossary/s#scripts "NA") or [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") files, rather than working directly in the console window. However, you can consider the console a kind of “sandbox” where you can try out lines of code and adapt them until you get them to do what you want. Then you can copy them back into the script editor.
Mostly, however, you will be typing into the script editor window (either into an R script or an R Markdown file) and then sending the commands to the console by placing the cursor on the line and holding down the Ctrl key while you press Enter. The Ctrl\+Enter key sequence sends the command in the script to the console.
One simple way to learn about the R console is to use it as a calculator. Enter the lines of code below and see if your results match. Be prepared to make lots of typos (at first).
```
1 + 1
```
```
## [1] 2
```
The R console remembers a history of the commands you typed in the past. Use the up and down arrow keys on your keyboard to scroll backwards and forwards through your history. It’s a lot faster than re\-typing.
```
1 + 1 + 3
```
```
## [1] 5
```
You can break up mathematical expressions over multiple lines; R waits for a complete expression before processing it.
```
## here comes a long expression
## let's break it over multiple lines
1 + 2 + 3 + 4 + 5 + 6 +
7 + 8 + 9 +
10
```
```
## [1] 55
```
Text inside quotes is called a [string](https://psyteachr.github.io/glossary/s#string "A piece of text inside of quotes.").
```
"Good afternoon"
```
```
## [1] "Good afternoon"
```
You can break up text over multiple lines; R waits for a close quote before processing it. If you want to include a double quote inside this quoted string, [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it with a backslash.
```
africa <- "I hear the drums echoing tonight
But she hears only whispers of some quiet conversation
She's coming in, 12:30 flight
The moonlit wings reflect the stars that guide me towards salvation
I stopped an old man along the way
Hoping to find some old forgotten words or ancient melodies
He turned to me as if to say, \"Hurry boy, it's waiting there for you\"
- Toto"
cat(africa) # cat() prints the string
```
```
## I hear the drums echoing tonight
## But she hears only whispers of some quiet conversation
## She's coming in, 12:30 flight
## The moonlit wings reflect the stars that guide me towards salvation
## I stopped an old man along the way
## Hoping to find some old forgotten words or ancient melodies
## He turned to me as if to say, "Hurry boy, it's waiting there for you"
##
## - Toto
```
### 1\.4\.2 Objects
Often you want to store the result of some computation for later use. You can store it in an [object](https://psyteachr.github.io/glossary/o#object "A word that identifies and stores the value of some data for later use.") (also sometimes called a [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.")). An object in R:
* contains only letters, numbers, full stops, and underscores
* starts with a letter or a full stop and a letter
* distinguishes uppercase and lowercase letters (`rickastley` is not the same as `RickAstley`)
The following are valid and different objects:
* songdata
* SongData
* song\_data
* song.data
* .song.data
* never\_gonna\_give\_you\_up\_never\_gonna\_let\_you\_down
The following are not valid objects:
* \_song\_data
* 1song
* .1song
* song data
* song\-data
Use the [assignment operator](https://psyteachr.github.io/glossary/a#assignment-operator "The symbol <-, which functions like = and assigns the value on the right to the object on the left") `<-` to assign the value on the right to the object named on the left.
```
## use the assignment operator '<-'
## R stores the number in the object
x <- 5
```
Now that we have set `x` to a value, we can do something with it:
```
x * 2
## R evaluates the expression and stores the result in the object boring_calculation
boring_calculation <- 2 + 2
```
```
## [1] 10
```
Note that it doesn’t print the result back at you when it’s stored. To view the result, just type the object name on a blank line.
```
boring_calculation
```
```
## [1] 4
```
Once an object is assigned a value, its value doesn’t change unless you reassign the object, even if the objects you used to calculate it change. Predict what the code below does and test yourself:
```
this_year <- 2019
my_birth_year <- 1976
my_age <- this_year - my_birth_year
this_year <- 2020
```
After all the code above is run, what value does each object hold? (You can check your predictions with the snippet after this list.)
* `this_year`: 43, 44, 1976, 2019, or 2020?
* `my_birth_year`: 43, 44, 1976, 2019, or 2020?
* `my_age`: 43, 44, 1976, 2019, or 2020?
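Once you have made your predictions, one way to check them is simply to print each object; because `my_age` was computed before `this_year` was reassigned, it keeps the value from the original calculation. A minimal check, assuming you have run the chunk above:
```
this_year     # now 2020, because it was reassigned
my_birth_year # still 1976
my_age        # still 43: reassigning this_year did not change it
```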
### 1\.4\.3 The environment
Anytime you assign something to a new object, R creates a new entry in the [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs"). Objects in the global environment exist until you end your session; then they disappear forever (unless you save them).
Look at the **Environment** tab in the upper right pane. It lists all of the objects you have created. Click the broom icon to clear all of the objects and start fresh. You can also use the following functions in the console to view all objects, remove one object, or remove all objects.
```
ls() # print the objects in the global environment
rm("x") # remove the object named x from the global environment
rm(list = ls()) # clear out the global environment
```
In the upper right corner of the Environment tab, change **`List`** to **`Grid`**. Now you can see the type, length, and size of your objects, and reorder the list by any of these attributes.
### 1\.4\.4 Whitespace
R mostly ignores [whitespace](https://psyteachr.github.io/glossary/w#whitespace "Spaces, tabs and line breaks"): spaces, tabs, and line breaks. This means that you can use whitespace to help you organise your code.
```
# a and b are identical
a <- list(ctl = "Control Condition", exp1 = "Experimental Condition 1", exp2 = "Experimental Condition 2")
# but b is much easier to read
b <- list(ctl = "Control Condition",
exp1 = "Experimental Condition 1",
exp2 = "Experimental Condition 2")
```
When you see `>` at the beginning of a line, that means R is waiting for you to start a new command. However, if you see a `+` instead of `>` at the start of the line, that means R is waiting for you to finish a command you started on a previous line. If you want to cancel whatever command you started, just press the Esc key in the console window and you’ll get back to the `>` command prompt.
```
# R waits until next line for evaluation
(3 + 2) *
5
```
```
## [1] 25
```
It is often useful to break up long functions onto several lines.
```
cat("3, 6, 9, the goose drank wine",
"The monkey chewed tobacco on the streetcar line",
"The line broke, the monkey got choked",
"And they all went to heaven in a little rowboat",
sep = " \n")
```
```
## 3, 6, 9, the goose drank wine
## The monkey chewed tobacco on the streetcar line
## The line broke, the monkey got choked
## And they all went to heaven in a little rowboat
```
### 1\.4\.5 Function syntax
A lot of what you do in R involves calling a [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") and storing the results. A function is a named section of code that can be reused.
For example, `sd` is a function that returns the [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of the [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") of numbers that you provide as the input [argument](https://psyteachr.github.io/glossary/a#argument "A variable that provides input to a function."). Functions are set up like this:
`function_name(argument1, argument2 = "value")`.
The arguments in parentheses can be named (like, `argument1 = 10`) or you can skip the names if you put them in the exact same order that they’re defined in the function. You can check this by typing `?sd` (or whatever function name you’re looking up) into the console and the Help pane will show you the default order under **Usage**. You can also skip arguments that have a default value specified.
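For instance, the first argument of `sd()` is named `x` (the vector of numbers), so the two calls below are equivalent; the numbers are just made up for illustration.
```
sd(c(1, 5, 9))      # positional: the vector is matched to x by position
sd(x = c(1, 5, 9))  # named: the same call, so the same result
```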
Most functions return a value, but may also produce side effects like printing to the console.
To illustrate, the function `rnorm()` generates random numbers from the standard [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable."). The help page for `rnorm()` (accessed by typing `?rnorm` in the console) shows that it has the syntax
`rnorm(n, mean = 0, sd = 1)`
where `n` is the number of randomly generated numbers you want, `mean` is the mean of the distribution, and `sd` is the standard deviation. The default mean is 0, and the default standard deviation is 1\. There is no default for `n`, which means you’ll get an error if you don’t specify it:
```
rnorm()
```
```
## Error in rnorm(): argument "n" is missing, with no default
```
If you want 10 random numbers from a normal distribution with a mean of 0 and a standard deviation of 1, you can just use the defaults.
```
rnorm(10)
```
```
## [1] -0.04234663 -2.00393149 0.83611187 -1.46404127 1.31714428 0.42608581
## [7] -0.46673798 -0.01670509 1.64072295 0.85876439
```
If you want 10 numbers from a normal distribution with a mean of 100:
```
rnorm(10, 100)
```
```
## [1] 101.34917 99.86059 100.36287 99.65575 100.66818 100.04771 99.76782
## [8] 102.57691 100.05575 99.42490
```
This would be an equivalent but less efficient way of calling the function:
```
rnorm(n = 10, mean = 100)
```
```
## [1] 100.52773 99.40241 100.39641 101.01629 99.41961 100.52202 98.09828
## [8] 99.52169 100.25677 99.92092
```
We don’t need to name the arguments because R will recognize that we intended to fill in the first and second arguments by their position in the function call. However, if we want to change the default for an argument coming later in the list, then we need to name it. For instance, if we wanted to keep the default `mean = 0` but change the standard deviation to 100 we would do it this way:
```
rnorm(10, sd = 100)
```
```
## [1] -68.254349 -17.636619 140.047575 7.570674 -68.309751 -2.378786
## [7] 117.356343 -104.772092 -40.163750 54.358941
```
Some functions give a list of options after an argument; this means the default value is the first option. The usage entry for the `power.t.test()` function looks like this:
```
power.t.test(n = NULL, delta = NULL, sd = 1, sig.level = 0.05,
             power = NULL,
             type = c("two.sample", "one.sample", "paired"),
             alternative = c("two.sided", "one.sided"),
             strict = FALSE, tol = .Machine$double.eps^0.25)
```
* What is the default value for `sd`? NULL, 1, 0.05, or two.sample?
* What is the default value for `type`? NULL, two.sample, one.sample, or paired?
* Which call is equivalent to `power.t.test(100, 0.5)`? `power.t.test(100, 0.5, sig.level = 1, sd = 0.05)`, `power.t.test()`, `power.t.test(n = 100)`, or `power.t.test(delta = 0.5, n = 100)`?
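As a quick check of the positional matching rule, you can run the two calls below and compare the output: both fill in `n` and `delta` and leave every other argument at its default. This is just a sketch based on the usage entry shown above.
```
power.t.test(100, 0.5)             # positional: n = 100, delta = 0.5
power.t.test(n = 100, delta = 0.5) # named: exactly the same call
```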
### 1\.4\.6 Getting help
Start up help in a browser using the function `help.start()`.
If a function is in [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") or a loaded [package](https://psyteachr.github.io/glossary/p#package "A group of R functions."), you can use the `help("function_name")` function or the `?function_name` shortcut to access the help file. If the package isn’t loaded, specify the package name as the second argument to the help function.
```
# these methods are all equivalent ways of getting help
help("rnorm")
?rnorm
help("rnorm", package="stats")
```
When the package isn’t loaded or you aren’t sure what package the function is in, use the shortcut `??function_name`.
* What is the first argument to the `mean` function? `trim`, `na.rm`, `mean`, or `x`?
* What package is `read_excel` in? readr, readxl, base, or stats?
1\.5 Add\-on packages
---------------------
One of the great things about R is that it is **user extensible**: anyone can create a new add\-on software package that extends its functionality. There are currently thousands of add\-on packages that R users have created to solve many different kinds of problems, or just simply to have fun. There are packages for data visualisation, machine learning, neuroimaging, eyetracking, web scraping, and playing games such as Sudoku.
Add\-on packages are not distributed with base R, but have to be downloaded and installed from an archive, in the same way that you would, for instance, download and install a fitness app on your smartphone.
The main repository where packages reside is called CRAN, the Comprehensive R Archive Network. A package has to pass strict tests devised by the R core team to be allowed to be part of the CRAN archive. You can install from the CRAN archive through R using the `install.packages()` function.
There is an important distinction between **installing** a package and **loading** a package.
### 1\.5\.1 Installing a package
This is done using `install.packages()`. This is like installing an app on your phone: you only have to do it once and the app will remain installed until you remove it. For instance, if you want to use PokemonGo on your phone, you install it once from the App Store or Play Store, and you don’t have to re\-install it each time you want to use it. Once you launch the app, it will run in the background until you close it or restart your phone. Likewise, when you install a package, the package will be available (but not *loaded*) every time you open up R.
You may only be able to permanently install packages if you are using R on your own system; you may not be able to do this on public workstations if you lack the appropriate privileges.
Install the `ggExtra` package on your system. This package lets you create plots with marginal histograms.
```
install.packages("ggExtra")
```
If you don’t already have packages like ggplot2 and shiny installed, it will also install these **dependencies** for you. If you don’t get an error message at the end, the installation was successful.
### 1\.5\.2 Loading a package
This is done using `library(packagename)`. This is like **launching** an app on your phone: the functionality is only available once the app is launched, and it remains available until you close the app or restart. Likewise, when you run `library(packagename)` within a session, the functionality of that package will be available for the rest of your R session. The next time you start R, you will need to run `library()` again if you want to access that functionality.
You can load the functions in `ggExtra` for your current R session as follows:
```
library(ggExtra)
```
You might get some red text when you load a package; this is normal. It is usually warning you that the package has functions with the same names as functions in other packages you’ve already loaded.
You can use the convention `package::function()` to indicate in which add\-on package a function resides. For instance, if you see `readr::read_csv()`, that refers to the function `read_csv()` in the `readr` add\-on package.
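The `::` form also works for a package that is installed but not loaded, which is handy for occasional calls. A minimal sketch, using a function from the `tools` package that ships with R:
```
# call a function from an installed package without loading it first
tools::toTitleCase("data skills for reproducible science")
```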
Now you can run the function `ggExtra::runExample()`, which runs an interactive example of marginal plots using shiny.
```
ggExtra::runExample()
```
### 1\.5\.3 Install from GitHub
Many R packages are not yet on [CRAN](https://psyteachr.github.io/glossary/c#cran "The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up-to-date, versions of code and documentation for R.") because they are still in development. Increasingly, datasets and code for papers are available as packages you can download from GitHub. You’ll need to install the devtools package to be able to install packages from GitHub. Check if you have a package installed by trying to load it (e.g., if you don’t have devtools installed, `library("devtools")` will display an error message) or by searching for it in the packages tab in the lower right pane. All listed packages are installed; all checked packages are currently loaded.
Figure 1\.4: Check installed and loaded packages in the packages tab in the lower right pane.
```
# install devtools if you get
# Error in loadNamespace(name) : there is no package called ‘devtools’
# install.packages("devtools")
devtools::install_github("psyteachr/msc-data-skills")
```
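If you prefer a programmatic version of the “is it installed?” check described above, `requireNamespace()` returns `FALSE` instead of throwing an error when a package is missing. A minimal sketch:
```
# install devtools only if it is not already installed
if (!requireNamespace("devtools", quietly = TRUE)) {
  install.packages("devtools")
}
```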
After you install the dataskills package, load it using the `library()` function. You can then try out some of the functions below.
* `book()` opens a local copy of this book in your web browser.
* `app("plotdemo")` opens a shiny app that lets you see how simulated data would look in different plot styles
* `exercise(1)` creates and opens a file containing the exercises for this chapter
* `?disgust` shows you the documentation for the built\-in dataset `disgust`, which we will be using in future lessons
```
library(dataskills)
book()
app("plotdemo")
exercise(1)
?disgust
```
How many different ways can you find to discover what functions are available in the dataskills package?
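A few starting points you might try (this is a sketch, not an exhaustive answer, and it assumes you have installed and loaded dataskills as above):
```
help(package = "dataskills")  # open the package's help index
ls("package:dataskills")      # list the objects a loaded package exports
lsf.str("package:dataskills") # list its functions together with their arguments
```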
1\.6 Organising a project
-------------------------
[Projects](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") in RStudio are a way to group all of the files you need for one project. Most projects include scripts, data files, and output files like the PDF version of the script or images.
Make a new directory where you will keep all of your materials for this class. If you’re using a lab computer, make sure you make this directory in your network drive so you can access it from other computers.
Choose **`New Project…`** under the **`File`** menu to create a new project called `01-intro` in this directory.
### 1\.6\.1 Structure
Here is what an R script looks like. Don’t worry about the details for now.
```
# load add-on packages
library(tidyverse)
# set object ----
n <- 100
# simulate data ----
data <- data.frame(
  id = 1:n,
  dv = c(rnorm(n/2, 0), rnorm(n/2, 1)),
  condition = rep(c("A", "B"), each = n/2)
)
# plot data ----
ggplot(data, aes(condition, dv)) +
  geom_violin(trim = FALSE) +
  geom_boxplot(width = 0.25,
               aes(fill = condition),
               show.legend = FALSE)
# save plot ----
ggsave("sim_data.png", width = 8, height = 6)
```
It’s best to follow this structure when developing your own scripts:
* load in any add\-on packages you need to use
* define any custom functions
* load or simulate the data you will be working with
* work with the data
* save anything you need to save
Often when you are working on a script, you will realize that you need to load another add\-on package. Don’t bury the call to `library(package_I_need)` way down in the script. Put it at the top, so the user has an overview of which packages are needed.
You can add comments to an R script with the hash symbol (`#`). The R interpreter will ignore characters from the hash to the end of the line.
```
## comments: any text from '#' on is ignored until end of line
22 / 7 # approximation to pi
```
```
## [1] 3.142857
```
If you add 4 or more dashes to the end of a comment, it acts like a header and creates an outline that you can see in the document outline (⇧⌘O).
### 1\.6\.2 Reproducible reports with R Markdown
We will make reproducible reports following the principles of [literate programming](https://en.wikipedia.org/wiki/Literate_programming). The basic idea is to have the text of the report together in a single document along with the code needed to perform all analyses and generate the tables. The report is then “compiled” from the original format into some other, more portable format, such as HTML or PDF. This is different from traditional cutting and pasting approaches where, for instance, you create a graph in Microsoft Excel or a statistics program like SPSS and then paste it into Microsoft Word.
We will use [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") to create reproducible reports, which enables mixing of text and code. A reproducible script will contain sections of code in code blocks. A code block starts and ends with three backtick symbols in a row, with some information about the code between curly brackets, such as `{r chunk-name, echo=FALSE}` (this runs the code, but does not show the text of the code block in the compiled document). The text outside of code blocks is written in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links."), which is a way to specify formatting, such as headers, paragraphs, lists, bolding, and links.
Figure 1\.5: A reproducible script.
If you open up a new R Markdown file from a template, you will see an example document with several code blocks in it. To create an HTML or PDF report from an R Markdown (Rmd) document, you compile it. Compiling a document is called [knitting](https://psyteachr.github.io/glossary/k#knit "To create an HTML, PDF, or Word document from an R Markdown (Rmd) document") in RStudio. There is a button that looks like a ball of yarn with needles through it that you click on to compile your file into a report.
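You can also knit from the console instead of the button. A minimal sketch, assuming your file is saved as `report.Rmd` (a hypothetical name) and the rmarkdown package is installed:
```
# compile an R Markdown file into a report from the console
rmarkdown::render("report.Rmd")
```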
Create a new R Markdown file from the **`File > New File > R Markdown…`** menu. Change the title and author, then click the knit button to create an html file.
### 1\.6\.3 Working Directory
Where should you put all of your files? When developing an analysis, you usually want to have all of your scripts and data files in one subtree of your computer’s directory structure. Usually there is a single [working directory](https://psyteachr.github.io/glossary/w#working-directory "The filepath where R is currently reading and writing files.") where your data and scripts are stored.
Your script should only reference files in three locations, using the appropriate format.
| Where | Example |
| --- | --- |
| on the web | “[https://psyteachr.github.io/msc\-data\-skills/data/disgust\_scores.csv](https://psyteachr.github.io/msc-data-skills/data/disgust_scores.csv)” |
| in the working directory | “disgust\_scores.csv” |
| in a subdirectory | “data/disgust\_scores.csv” |
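You can always check where R is currently reading and writing files, and what it can see there, with two base functions:
```
getwd()      # print the current working directory as a string
list.files() # list the files visible from that directory
```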
Never set or change your working directory in a script.
If you are working with an R Markdown file, it will automatically use the same directory the .Rmd file is in as the working directory.
If you are working with R scripts, store your main script file in the top\-level directory and manually set your working directory to that location. You will have to reset the working directory each time you open RStudio, unless you create a [project](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") and access the script from the project.
For instance, if you are on a Windows machine and your data and scripts are in the directory `C:\Carla's_files\thesis2\my_thesis\new_analysis`, you can set your working directory in one of two ways: (1\) by going to the `Session` pull\-down menu in RStudio and choosing `Set Working Directory`, or (2\) by typing `setwd("C:/Carla's_files/thesis2/my_thesis/new_analysis")` in the console window (note the forward slashes inside the quoted string; backslashes would be treated as escape characters and cause an error).
It’s tempting to make your life simple by putting the `setwd()` command in your script. Don’t do this! Others will not have the same directory tree as you (and when your laptop dies and you get a new one, neither will you).
When manually setting the working directory, always do so by using the **`Session > Set Working Directory`** pull\-down option or by typing `setwd()` in the console.
If your script needs a file in a subdirectory of `new_analysis`, say, `data/questionnaire.csv`, load it in using a [relative path](https://psyteachr.github.io/glossary/r#relative-path "The location of a file in relation to the working directory.") so that it is accessible if you move the folder `new_analysis` to another location or computer:
```
dat <- read_csv("data/questionnaire.csv") # correct
```
Do not load it in using an [absolute path](https://psyteachr.github.io/glossary/a#absolute-path "A file path that starts with / and is not appended to the working directory"):
```
dat <- read_csv("C:/Carla's_files/thesis22/my_thesis/new_analysis/data/questionnaire.csv") # wrong
```
Also note the convention of using forward slashes, unlike the Windows\-specific convention of using backward slashes. This is to make references to files platform independent.
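If you need to build paths in code, `file.path()` assembles them with the correct separator for you, which keeps scripts platform independent. A minimal sketch, reusing the example file above (and assuming readr/tidyverse is loaded for `read_csv()`):
```
# build a relative path without hard-coding the separator
path <- file.path("data", "questionnaire.csv")
dat <- read_csv(path)
```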
1\.7 Glossary
-------------
Each chapter ends with a glossary table defining the jargon introduced in this chapter. The links below take you to the [glossary book](https://psyteachr.github.io/glossary), which you can also download for offline use with `devtools::install_github("psyteachr/glossary")` and access the glossary offline with `glossary::book()`.
| term | definition |
| --- | --- |
| [absolute path](https://psyteachr.github.io/glossary/a#absolute.path) | A file path that starts with / and is not appended to the working directory |
| [argument](https://psyteachr.github.io/glossary/a#argument) | A variable that provides input to a function. |
| [assignment operator](https://psyteachr.github.io/glossary/a#assignment.operator) | The symbol \<\-, which functions like \= and assigns the value on the right to the object on the left |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [console](https://psyteachr.github.io/glossary/c#console) | The pane in RStudio where you can type in commands and view output messages. |
| [cran](https://psyteachr.github.io/glossary/c#cran) | The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up\-to\-date, versions of code and documentation for R. |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [ide](https://psyteachr.github.io/glossary/i#ide) | Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R. |
| [knit](https://psyteachr.github.io/glossary/k#knit) | To create an HTML, PDF, or Word document from an R Markdown (Rmd) document |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [object](https://psyteachr.github.io/glossary/o#object) | A word that identifies and stores the value of some data for later use. |
| [package](https://psyteachr.github.io/glossary/p#package) | A group of R functions. |
| [panes](https://psyteachr.github.io/glossary/p#panes) | RStudio is arranged with four window “panes.” |
| [project](https://psyteachr.github.io/glossary/p#project) | A way to organise related files in RStudio |
| [r markdown](https://psyteachr.github.io/glossary/r#r.markdown) | The R\-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code. |
| [relative path](https://psyteachr.github.io/glossary/r#relative.path) | The location of a file in relation to the working directory. |
| [reproducible research](https://psyteachr.github.io/glossary/r#reproducible.research) | Research that documents all of the steps between raw data and results in a way that can be verified. |
| [script](https://psyteachr.github.io/glossary/s#script) | A plain\-text file that contains commands in a coding language, such as R. |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
| [string](https://psyteachr.github.io/glossary/s#string) | A piece of text inside of quotes. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [whitespace](https://psyteachr.github.io/glossary/w#whitespace) | Spaces, tabs and line breaks |
| [working directory](https://psyteachr.github.io/glossary/w#working.directory) | The filepath where R is currently reading and writing files. |
1\.8 Exercises
--------------
Download the first set of [exercises](exercises/01_intro_exercise.Rmd) and put it in the project directory you created earlier for today’s exercises. See the [answers](exercises/01_intro_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(1)
# run this to access the answers
dataskills::exercise(1, answers = TRUE)
```
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/intro.html |
Chapter 1 Getting Started
=========================
1\.1 Learning Objectives
------------------------
1. Understand the components of the [RStudio IDE](intro.html#rstudio_ide) [(video)](https://youtu.be/CbA6ZVlJE78)
2. Type commands into the [console](intro.html#console) [(video)](https://youtu.be/wbI4c_7y0kE)
3. Understand [function syntax](intro.html#function_syx) [(video)](https://youtu.be/X5P038N5Q8I)
4. Install a [package](intro.html#install-package) [(video)](https://youtu.be/u_pvHnqkVCE)
5. Organise a [project](intro.html#projects) [(video)](https://youtu.be/y-KiPueC9xw)
6. Create and compile an [Rmarkdown document](intro.html#rmarkdown) [(video)](https://youtu.be/EqJiAlJAl8Y)
1\.2 Resources
--------------
* [Chapter 1: Introduction](http://r4ds.had.co.nz/introduction.html) in *R for Data Science*
* [RStudio IDE Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/rstudio-ide.pdf)
* [Introduction to R Markdown](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/rmarkdown-2.0.pdf)
* [R Markdown Reference](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)
* [RStudio Cloud](https://rstudio.cloud/)
1\.3 What is R?
---------------
R is a programming environment for data processing and statistical analysis. We use R in Psychology at the University of Glasgow to promote [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This refers to being able to document and reproduce all of the steps between raw data and results. R allows you to write [scripts](https://psyteachr.github.io/glossary/s#script "A plain-text file that contains commands in a coding language, such as R.") that combine data files, clean data, and run analyses. There are many other ways to do this, including writing SPSS syntax files, but we find R to be a useful tool that is free, open source, and commonly used by research psychologists.
See Appendix [A](installingr.html#installingr) for more information on how to install R and associated programs.
### 1\.3\.1 The Base R Console
If you open up the application called R, you will see an “R Console” window that looks something like this.
Figure 1\.1: The R Console window.
You can close R and never open it again. We’ll be working entirely in RStudio in this class.
ALWAYS REMEMBER: Launch R through the RStudio IDE, not through the base R application.
### 1\.3\.2 RStudio
[RStudio](http://www.rstudio.com) is an Integrated Development Environment ([IDE](https://psyteachr.github.io/glossary/i#ide "Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R.")). This is a program that serves as a text editor, file manager, and provides many functions to help you read and write R code.
Figure 1\.2: The RStudio IDE
RStudio is arranged with four window [panes](https://psyteachr.github.io/glossary/p#panes "RStudio is arranged with four window “panes.”"). By default, the upper left pane is the **source pane**, where you view and edit source code from files. The bottom left pane is usually the **console pane**, where you can type in commands and view output messages. The right panes have several different tabs that show you information about your code. You can change the location of panes and what tabs are shown under **`Preferences > Pane Layout`**.
### 1\.3\.3 Configure RStudio
In this class, you will be learning how to do [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This involves writing scripts that completely and transparently perform some analysis from start to finish in a way that yields the same result for different people using the same software on different computers. Transparency is a key value of science, as embodied in the “trust but verify” motto.
When you do things reproducibly, others can understand and check your work. This benefits science, but there is a selfish reason, too: the most important person who will benefit from a reproducible script is your future self. When you return to an analysis after two weeks of vacation, you will thank your earlier self for doing things in a transparent, reproducible way, as you can easily pick up right where you left off.
There are two tweaks that you should do to your RStudio installation to maximize reproducibility. Go to **`Global Options...`** under the **`Tools`** menu (⌘,), and uncheck the box that says **`Restore .RData into workspace at startup`**. If you keep things around in your workspace, things will get messy, and unexpected things will happen. You should always start with a clear workspace. This also means that you never want to save your workspace when you exit, so set this to **`Never`**. The only things you want to save are your scripts.
Figure 1\.3: Alter these settings for increased reproducibility.
Your settings should have:
* Restore .RData into workspace at startup: Not Checked
* Save workspace to .RData on exit: Never
1\.4 Getting Started
--------------------
### 1\.4\.1 Console commands
We are first going to learn about how to interact with the [console](https://psyteachr.github.io/glossary/c#console "The pane in RStudio where you can type in commands and view output messages."). In general, you will be developing R [script](https://psyteachr.github.io/glossary/s#scripts "NA") or [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") files, rather than working directly in the console window. However, you can consider the console a kind of “sandbox” where you can try out lines of code and adapt them until you get them to do what you want. Then you can copy them back into the script editor.
Mostly, however, you will be typing into the script editor window (either into an R script or an R Markdown file) and then sending the commands to the console by placing the cursor on the line and holding down the Ctrl key while you press Enter. The Ctrl\+Enter key sequence sends the command in the script to the console.
One simple way to learn about the R console is to use it as a calculator. Enter the lines of code below and see if your results match. Be prepared to make lots of typos (at first).
```
1 + 1
```
```
## [1] 2
```
The R console remembers a history of the commands you typed in the past. Use the up and down arrow keys on your keyboard to scroll backwards and forwards through your history. It’s a lot faster than re\-typing.
```
1 + 1 + 3
```
```
## [1] 5
```
You can break up mathematical expressions over multiple lines; R waits for a complete expression before processing it.
```
## here comes a long expression
## let's break it over multiple lines
1 + 2 + 3 + 4 + 5 + 6 +
7 + 8 + 9 +
10
```
```
## [1] 55
```
Text inside quotes is called a [string](https://psyteachr.github.io/glossary/s#string "A piece of text inside of quotes.").
```
"Good afternoon"
```
```
## [1] "Good afternoon"
```
You can break up text over multiple lines; R waits for a close quote before processing it. If you want to include a double quote inside this quoted string, [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it with a backslash.
```
africa <- "I hear the drums echoing tonight
But she hears only whispers of some quiet conversation
She's coming in, 12:30 flight
The moonlit wings reflect the stars that guide me towards salvation
I stopped an old man along the way
Hoping to find some old forgotten words or ancient melodies
He turned to me as if to say, \"Hurry boy, it's waiting there for you\"
- Toto"
cat(africa) # cat() prints the string
```
```
## I hear the drums echoing tonight
## But she hears only whispers of some quiet conversation
## She's coming in, 12:30 flight
## The moonlit wings reflect the stars that guide me towards salvation
## I stopped an old man along the way
## Hoping to find some old forgotten words or ancient melodies
## He turned to me as if to say, "Hurry boy, it's waiting there for you"
##
## - Toto
```
### 1\.4\.2 Objects
Often you want to store the result of some computation for later use. You can store it in an [object](https://psyteachr.github.io/glossary/o#object "A word that identifies and stores the value of some data for later use.") (also sometimes called a [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.")). An object in R:
* contains only letters, numbers, full stops, and underscores
* starts with a letter, or with a full stop followed by a letter
* distinguishes uppercase and lowercase letters (`rickastley` is not the same as `RickAstley`)
The following are valid and different objects:
* songdata
* SongData
* song\_data
* song.data
* .song.data
* never\_gonna\_give\_you\_up\_never\_gonna\_let\_you\_down
The following are not valid objects:
* \_song\_data
* 1song
* .1song
* song data
* song\-data
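If you try to use a name that breaks these rules, R refuses to parse it; in the console you will see something like the error below (the exact wording can vary slightly between R versions):
```
1song <- "never gonna give you up"
```
```
## Error: unexpected symbol in "1song"
```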
Use the [assignment operator](https://psyteachr.github.io/glossary/a#assignment-operator "The symbol <-, which functions like = and assigns the value on the right to the object on the left") `<-` to assign the value on the right to the object named on the left.
```
## use the assignment operator '<-'
## R stores the number in the object
x <- 5
```
Now that we have set `x` to a value, we can do something with it:
```
x * 2
## R evaluates the expression and stores the result in the object boring_calculation
boring_calculation <- 2 + 2
```
```
## [1] 10
```
Note that it doesn’t print the result back at you when it’s stored. To view the result, just type the object name on a blank line.
```
boring_calculation
```
```
## [1] 4
```
Once an object is assigned a value, its value doesn’t change unless you reassign the object, even if the objects you used to calculate it change. Predict what the code below does and test yourself:
```
this_year <- 2019
my_birth_year <- 1976
my_age <- this_year - my_birth_year
this_year <- 2020
```
After all the code above is run:
* `this_year` = ? (43, 44, 1976, 2019, or 2020)
* `my_birth_year` = ? (43, 44, 1976, 2019, or 2020)
* `my_age` = ? (43, 44, 1976, 2019, or 2020)
### 1\.4\.3 The environment
Anytime you assign something to a new object, R creates a new entry in the [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs"). Objects in the global environment exist until you end your session; then they disappear forever (unless you save them).
Look at the **Environment** tab in the upper right pane. It lists all of the objects you have created. Click the broom icon to clear all of the objects and start fresh. You can also use the following functions in the console to view all objects, remove one object, or remove all objects.
```
ls() # print the objects in the global environment
rm("x") # remove the object named x from the global environment
rm(list = ls()) # clear out the global environment
```
In the upper right corner of the Environment tab, change **`List`** to **`Grid`**. Now you can see the type, length, and size of your objects, and reorder the list by any of these attributes.
### 1\.4\.4 Whitespace
R mostly ignores [whitespace](https://psyteachr.github.io/glossary/w#whitespace "Spaces, tabs and line breaks"): spaces, tabs, and line breaks. This means that you can use whitespace to help you organise your code.
```
# a and b are identical
a <- list(ctl = "Control Condition", exp1 = "Experimental Condition 1", exp2 = "Experimental Condition 2")
# but b is much easier to read
b <- list(ctl = "Control Condition",
          exp1 = "Experimental Condition 1",
          exp2 = "Experimental Condition 2")
```
When you see `>` at the beginning of a line, that means R is waiting for you to start a new command. However, if you see a `+` instead of `>` at the start of the line, that means R is waiting for you to finish a command you started on a previous line. If you want to cancel whatever command you started, just press the Esc key in the console window and you’ll get back to the `>` command prompt.
```
# R waits until next line for evaluation
(3 + 2) *
5
```
```
## [1] 25
```
It is often useful to break up long functions onto several lines.
```
cat("3, 6, 9, the goose drank wine",
"The monkey chewed tobacco on the streetcar line",
"The line broke, the monkey got choked",
"And they all went to heaven in a little rowboat",
sep = " \n")
```
```
## 3, 6, 9, the goose drank wine
## The monkey chewed tobacco on the streetcar line
## The line broke, the monkey got choked
## And they all went to heaven in a little rowboat
```
### 1\.4\.5 Function syntax
A lot of what you do in R involves calling a [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") and storing the results. A function is a named section of code that can be reused.
For example, `sd` is a function that returns the [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of the [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") of numbers that you provide as the input [argument](https://psyteachr.github.io/glossary/a#argument "A variable that provides input to a function."). Functions are set up like this:
`function_name(argument1, argument2 = "value")`.
The arguments in parentheses can be named (like, `argument1 = 10`) or you can skip the names if you put them in the exact same order that they’re defined in the function. You can check this by typing `?sd` (or whatever function name you’re looking up) into the console and the Help pane will show you the default order under **Usage**. You can also skip arguments that have a default value specified.
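For example, `sd()` takes the vector as its first argument (named `x` on its help page), so the following two calls are equivalent:
```
sd(c(1, 2, 3, 4, 5))     # argument matched by position
sd(x = c(1, 2, 3, 4, 5)) # argument matched by name
```
```
## [1] 1.581139
## [1] 1.581139
```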
Most functions return a value, but may also produce side effects like printing to the console.
To illustrate, the function `rnorm()` generates random numbers from the standard [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable."). The help page for `rnorm()` (accessed by typing `?rnorm` in the console) shows that it has the syntax
`rnorm(n, mean = 0, sd = 1)`
where `n` is the number of randomly generated numbers you want, `mean` is the mean of the distribution, and `sd` is the standard deviation. The default mean is 0, and the default standard deviation is 1\. There is no default for `n`, which means you’ll get an error if you don’t specify it:
```
rnorm()
```
```
## Error in rnorm(): argument "n" is missing, with no default
```
If you want 10 random numbers from a normal distribution with a mean of 0 and a standard deviation of 1, you can just use the defaults.
```
rnorm(10)
```
```
## [1] -0.04234663 -2.00393149 0.83611187 -1.46404127 1.31714428 0.42608581
## [7] -0.46673798 -0.01670509 1.64072295 0.85876439
```
If you want 10 numbers from a normal distribution with a mean of 100:
```
rnorm(10, 100)
```
```
## [1] 101.34917 99.86059 100.36287 99.65575 100.66818 100.04771 99.76782
## [8] 102.57691 100.05575 99.42490
```
This would be an equivalent but less efficient way of calling the function:
```
rnorm(n = 10, mean = 100)
```
```
## [1] 100.52773 99.40241 100.39641 101.01629 99.41961 100.52202 98.09828
## [8] 99.52169 100.25677 99.92092
```
We don’t need to name the arguments because R will recognize that we intended to fill in the first and second arguments by their position in the function call. However, if we want to change the default for an argument coming later in the list, then we need to name it. For instance, if we wanted to keep the default `mean = 0` but change the standard deviation to 100 we would do it this way:
```
rnorm(10, sd = 100)
```
```
## [1] -68.254349 -17.636619 140.047575 7.570674 -68.309751 -2.378786
## [7] 117.356343 -104.772092 -40.163750 54.358941
```
Some functions give a list of options after an argument; this means the default value is the first option. The usage entry for the `power.t.test()` function looks like this:
```
power.t.test(n = NULL, delta = NULL, sd = 1, sig.level = 0.05,
             power = NULL,
             type = c("two.sample", "one.sample", "paired"),
             alternative = c("two.sided", "one.sided"),
             strict = FALSE, tol = .Machine$double.eps^0.25)
```
* What is the default value for `sd`? (NULL, 1, 0.05, or two.sample)
* What is the default value for `type`? (NULL, two.sample, one.sample, or paired)
* Which is equivalent to `power.t.test(100, 0.5)`? (`power.t.test(100, 0.5, sig.level = 1, sd = 0.05)`, `power.t.test()`, `power.t.test(n = 100)`, or `power.t.test(delta = 0.5, n = 100)`)
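Once you have thought about your answers, one way to explore the defaults is to run a call that leaves `sd`, `type`, and `alternative` unspecified; the sample size and effect size below are arbitrary:
```
# sd defaults to 1, type to "two.sample", alternative to "two.sided"
power.t.test(n = 20, delta = 0.5)
```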
### 1\.4\.6 Getting help
Start up help in a browser using the function `help.start()`.
If a function is in [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") or a loaded [package](https://psyteachr.github.io/glossary/p#package "A group of R functions."), you can use the `help("function_name")` function or the `?function_name` shortcut to access the help file. If the package isn’t loaded, specify the package name as the second argument to the help function.
```
# these methods are all equivalent ways of getting help
help("rnorm")
?rnorm
help("rnorm", package="stats")
```
When the package isn’t loaded or you aren’t sure what package the function is in, use the shortcut `??function_name`.
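For example, to find out which installed package provides `read_excel`, you could type:
```
# search the help files of all installed packages
??read_excel
```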
* What is the first argument to the `mean` function? (trim, na.rm, mean, or x)
* What package is `read_excel` in? (readr, readxl, base, or stats)
1\.5 Add\-on packages
---------------------
One of the great things about R is that it is **user extensible**: anyone can create a new add\-on software package that extends its functionality. There are currently thousands of add\-on packages that R users have created to solve many different kinds of problems, or just simply to have fun. There are packages for data visualisation, machine learning, neuroimaging, eyetracking, web scraping, and playing games such as Sudoku.
Add\-on packages are not distributed with base R, but have to be downloaded and installed from an archive, in the same way that you would, for instance, download and install a fitness app on your smartphone.
The main repository where packages reside is called CRAN, the Comprehensive R Archive Network. A package has to pass strict tests devised by the R core team to be allowed to be part of the CRAN archive. You can install from the CRAN archive through R using the `install.packages()` function.
There is an important distinction between **installing** a package and **loading** a package.
### 1\.5\.1 Installing a package
This is done using `install.packages()`. This is like installing an app on your phone: you only have to do it once and the app will remain installed until you remove it. For instance, if you want to use PokemonGo on your phone, you install it once from the App Store or Play Store, and you don’t have to re\-install it each time you want to use it. Once you launch the app, it will run in the background until you close it or restart your phone. Likewise, when you install a package, the package will be available (but not *loaded*) every time you open up R.
You may only be able to permanently install packages if you are using R on your own system; you may not be able to do this on public workstations if you lack the appropriate privileges.
Install the `ggExtra` package on your system. This package lets you create plots with marginal histograms.
```
install.packages("ggExtra")
```
If you don’t already have packages like ggplot2 and shiny installed, it will also install these **dependencies** for you. If you don’t get an error message at the end, the installation was successful.
### 1\.5\.2 Loading a package
This is done using `library(packagename)`. This is like **launching** an app on your phone: the functionality is only there where the app is launched and remains there until you close the app or restart. Likewise, when you run `library(packagename)` within a session, the functionality of the package referred to by `packagename` will be made available for your R session. The next time you start R, you will need to run the `library()` function again if you want to access its functionality.
You can load the functions in `ggExtra` for your current R session as follows:
```
library(ggExtra)
```
You might get some red text when you load a package; this is normal. It is usually warning you that this package has functions with the same names as functions in other packages you've already loaded.
You can use the convention `package::function()` to indicate in which add\-on package a function resides. For instance, if you see `readr::read_csv()`, that refers to the function `read_csv()` in the `readr` add\-on package.
Now you can run the function `ggExtra::runExample()`, which runs an interactive example of marginal plots using shiny.
```
ggExtra::runExample()
```
### 1\.5\.3 Install from GitHub
Many R packages are not yet on [CRAN](https://psyteachr.github.io/glossary/c#cran "The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up-to-date, versions of code and documentation for R.") because they are still in development. Increasingly, datasets and code for papers are available as packages you can download from GitHub. You'll need to install the devtools package to install packages from GitHub. Check if you have a package installed by trying to load it (e.g., if you don't have devtools installed, `library("devtools")` will display an error message) or by searching for it in the packages tab in the lower right pane. All listed packages are installed; all checked packages are currently loaded.
Figure 1\.4: Check installed and loaded packages in the packages tab in the lower right pane.
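If you would rather check from code, a minimal sketch is to test whether a package can be loaded and install it only when it is missing:
```
# install devtools only if it is not already installed
if (!requireNamespace("devtools", quietly = TRUE)) {
  install.packages("devtools")
}
```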
```
# install devtools if you get
# Error in loadNamespace(name) : there is no package called ‘devtools’
# install.packages("devtools")
devtools::install_github("psyteachr/msc-data-skills")
```
After you install the dataskills package, load it using the `library()` function. You can then try out some of the functions below.
* `book()` opens a local copy of this book in your web browser.
* `app("plotdemo")` opens a shiny app that lets you see how simulated data would look in different plot styles
* `exercise(1)` creates and opens a file containing the exercises for this chapter
* `?disgust` shows you the documentation for the built\-in dataset `disgust`, which we will be using in future lessons
```
library(dataskills)
book()
app("plotdemo")
exercise(1)
?disgust
```
How many different ways can you find to discover what functions are available in the dataskills package?
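If you want a nudge, one starting point (among several) is the package's help index:
```
# list the help topics that the dataskills package provides
help(package = "dataskills")
```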
1\.6 Organising a project
-------------------------
[Projects](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") in RStudio are a way to group all of the files you need for one project. Most projects include scripts, data files, and output files like the PDF version of the script or images.
Make a new directory where you will keep all of your materials for this class. If you’re using a lab computer, make sure you make this directory in your network drive so you can access it from other computers.
Choose **`New Project…`** under the **`File`** menu to create a new project called `01-intro` in this directory.
### 1\.6\.1 Structure
Here is what an R script looks like. Don’t worry about the details for now.
```
# load add-on packages
library(tidyverse)
# set object ----
n <- 100
# simulate data ----
data <- data.frame(
  id = 1:n,
  dv = c(rnorm(n/2, 0), rnorm(n/2, 1)),
  condition = rep(c("A", "B"), each = n/2)
)
# plot data ----
ggplot(data, aes(condition, dv)) +
  geom_violin(trim = FALSE) +
  geom_boxplot(width = 0.25,
               aes(fill = condition),
               show.legend = FALSE)
# save plot ----
ggsave("sim_data.png", width = 8, height = 6)
```
It’s best if you follow the following structure when developing your own scripts:
* load in any add\-on packages you need to use
* define any custom functions
* load or simulate the data you will be working with
* work with the data
* save anything you need to save
Often when you are working on a script, you will realize that you need to load another add\-on package. Don’t bury the call to `library(package_I_need)` way down in the script. Put it in the top, so the user has an overview of what packages are needed.
You can add comments to an R script with the hash symbol (`#`). The R interpreter will ignore characters from the hash to the end of the line.
```
## comments: any text from '#' on is ignored until end of line
22 / 7 # approximation to pi
```
```
## [1] 3.142857
```
If you add 4 or more dashes to the end of a comment, it acts like a header and creates an outline that you can see in the document outline (⇧⌘O).
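For example, section comments like these show up as headings in the outline (the section names are just illustrations):
```
# load packages ----
library(tidyverse)

# import data ----

# analyse data ----
```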
### 1\.6\.2 Reproducible reports with R Markdown
We will make reproducible reports following the principles of [literate programming](https://en.wikipedia.org/wiki/Literate_programming). The basic idea is to have the text of the report together in a single document along with the code needed to perform all analyses and generate the tables. The report is then “compiled” from the original format into some other, more portable format, such as HTML or PDF. This is different from traditional cutting and pasting approaches where, for instance, you create a graph in Microsoft Excel or a statistics program like SPSS and then paste it into Microsoft Word.
We will use [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") to create reproducible reports, which enables mixing of text and code. A reproducible script will contain sections of code in code blocks. A code block starts and ends with three backtick symbols in a row, with some information about the code between curly brackets, such as `{r chunk-name, echo=FALSE}` (this runs the code, but does not show the text of the code block in the compiled document). The text outside of code blocks is written in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links."), which is a way to specify formatting, such as headers, paragraphs, lists, bolding, and links.
Figure 1\.5: A reproducible script.
If you open up a new R Markdown file from a template, you will see an example document with several code blocks in it. To create an HTML or PDF report from an R Markdown (Rmd) document, you compile it. Compiling a document is called [knitting](https://psyteachr.github.io/glossary/k#knit "To create an HTML, PDF, or Word document from an R Markdown (Rmd) document") in RStudio. There is a button that looks like a ball of yarn with needles through it that you click on to compile your file into a report.
Create a new R Markdown file from the **`File > New File > R Markdown…`** menu. Change the title and author, then click the knit button to create an html file.
### 1\.6\.3 Working Directory
Where should you put all of your files? When developing an analysis, you usually want to have all of your scripts and data files in one subtree of your computer’s directory structure. Usually there is a single [working directory](https://psyteachr.github.io/glossary/w#working-directory "The filepath where R is currently reading and writing files.") where your data and scripts are stored.
Your script should only reference files in three locations, using the appropriate format.
| Where | Example |
| --- | --- |
| on the web | “[https://psyteachr.github.io/msc\-data\-skills/data/disgust\_scores.csv](https://psyteachr.github.io/msc-data-skills/data/disgust_scores.csv)” |
| in the working directory | “disgust\_scores.csv” |
| in a subdirectory | “data/disgust\_scores.csv” |
Never set or change your working directory in a script.
If you are working with an R Markdown file, it will automatically use the same directory the .Rmd file is in as the working directory.
If you are working with R scripts, store your main script file in the top\-level directory and manually set your working directory to that location. You will have to reset the working directory each time you open RStudio, unless you create a [project](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") and access the script from the project.
For instance, if you are on a Windows machine and your data and scripts are in the directory `C:\Carla's_files\thesis2\my_thesis\new_analysis`, you can set your working directory in one of two ways: (1) by going to the `Session` pull-down menu in RStudio and choosing `Set Working Directory`, or (2) by typing `setwd("C:/Carla's_files/thesis2/my_thesis/new_analysis")` in the console window (note the forward slashes in the R command).
It’s tempting to make your life simple by putting the `setwd()` command in your script. Don’t do this! Others will not have the same directory tree as you (and when your laptop dies and you get a new one, neither will you).
When manually setting the working directory, always do so by using the **`Session > Set Working Directory`** pull\-down option or by typing `setwd()` in the console.
If your script needs a file in a subdirectory of `new_analysis`, say, `data/questionnaire.csv`, load it in using a [relative path](https://psyteachr.github.io/glossary/r#relative-path "The location of a file in relation to the working directory.") so that it is accessible if you move the folder `new_analysis` to another location or computer:
```
dat <- read_csv("data/questionnaire.csv") # correct
```
Do not load it in using an [absolute path](https://psyteachr.github.io/glossary/a#absolute-path "A file path that starts with / and is not appended to the working directory"):
```
dat <- read_csv("C:/Carla's_files/thesis22/my_thesis/new_analysis/data/questionnaire.csv") # wrong
```
Also note the convention of using forward slashes, unlike the Windows\-specific convention of using backward slashes. This is to make references to files platform independent.
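A related helper is `file.path()`, which joins path components with the correct separator for you:
```
file.path("data", "questionnaire.csv")
```
```
## [1] "data/questionnaire.csv"
```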
1\.7 Glossary
-------------
Each chapter ends with a glossary table defining the jargon introduced in this chapter. The links below take you to the [glossary book](https://psyteachr.github.io/glossary), which you can also download for offline use with `devtools::install_github("psyteachr/glossary")` and access the glossary offline with `glossary::book()`.
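In runnable form, that is:
```
# run the first line once to install, then open the offline glossary
devtools::install_github("psyteachr/glossary")
glossary::book()
```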
| term | definition |
| --- | --- |
| [absolute path](https://psyteachr.github.io/glossary/a#absolute.path) | A file path that starts with / and is not appended to the working directory |
| [argument](https://psyteachr.github.io/glossary/a#argument) | A variable that provides input to a function. |
| [assignment operator](https://psyteachr.github.io/glossary/a#assignment.operator) | The symbol \<\-, which functions like \= and assigns the value on the right to the object on the left |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [console](https://psyteachr.github.io/glossary/c#console) | The pane in RStudio where you can type in commands and view output messages. |
| [cran](https://psyteachr.github.io/glossary/c#cran) | The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up\-to\-date, versions of code and documentation for R. |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [ide](https://psyteachr.github.io/glossary/i#ide) | Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R. |
| [knit](https://psyteachr.github.io/glossary/k#knit) | To create an HTML, PDF, or Word document from an R Markdown (Rmd) document |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [object](https://psyteachr.github.io/glossary/o#object) | A word that identifies and stores the value of some data for later use. |
| [package](https://psyteachr.github.io/glossary/p#package) | A group of R functions. |
| [panes](https://psyteachr.github.io/glossary/p#panes) | RStudio is arranged with four window “panes.” |
| [project](https://psyteachr.github.io/glossary/p#project) | A way to organise related files in RStudio |
| [r markdown](https://psyteachr.github.io/glossary/r#r.markdown) | The R\-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code. |
| [relative path](https://psyteachr.github.io/glossary/r#relative.path) | The location of a file in relation to the working directory. |
| [reproducible research](https://psyteachr.github.io/glossary/r#reproducible.research) | Research that documents all of the steps between raw data and results in a way that can be verified. |
| [script](https://psyteachr.github.io/glossary/s#script) | A plain\-text file that contains commands in a coding language, such as R. |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
| [string](https://psyteachr.github.io/glossary/s#string) | A piece of text inside of quotes. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [whitespace](https://psyteachr.github.io/glossary/w#whitespace) | Spaces, tabs and line breaks |
| [working directory](https://psyteachr.github.io/glossary/w#working.directory) | The filepath where R is currently reading and writing files. |
1\.8 Exercises
--------------
Download the first set of [exercises](exercises/01_intro_exercise.Rmd) and put it in the project directory you created earlier for today’s exercises. See the [answers](exercises/01_intro_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(1)
# run this to access the answers
dataskills::exercise(1, answers = TRUE)
```
1\.1 Learning Objectives
------------------------
1. Understand the components of the [RStudio IDE](intro.html#rstudio_ide) [(video)](https://youtu.be/CbA6ZVlJE78)
2. Type commands into the [console](intro.html#console) [(video)](https://youtu.be/wbI4c_7y0kE)
3. Understand [function syntax](intro.html#function_syx) [(video)](https://youtu.be/X5P038N5Q8I)
4. Install a [package](intro.html#install-package) [(video)](https://youtu.be/u_pvHnqkVCE)
5. Organise a [project](intro.html#projects) [(video)](https://youtu.be/y-KiPueC9xw)
6. Create and compile an [Rmarkdown document](intro.html#rmarkdown) [(video)](https://youtu.be/EqJiAlJAl8Y)
1\.2 Resources
--------------
* [Chapter 1: Introduction](http://r4ds.had.co.nz/introduction.html) in *R for Data Science*
* [RStudio IDE Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/rstudio-ide.pdf)
* [Introduction to R Markdown](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/rmarkdown-2.0.pdf)
* [R Markdown Reference](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)
* [RStudio Cloud](https://rstudio.cloud/)
1\.3 What is R?
---------------
R is a programming environment for data processing and statistical analysis. We use R in Psychology at the University of Glasgow to promote [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This refers to being able to document and reproduce all of the steps between raw data and results. R allows you to write [scripts](https://psyteachr.github.io/glossary/s#script "A plain-text file that contains commands in a coding language, such as R.") that combine data files, clean data, and run analyses. There are many other ways to do this, including writing SPSS syntax files, but we find R to be a useful tool that is free, open source, and commonly used by research psychologists.
See Appendix [A](installingr.html#installingr) for more information on on how to install R and associated programs.
### 1\.3\.1 The Base R Console
If you open up the application called R, you will see an “R Console” window that looks something like this.
Figure 1\.1: The R Console window.
You can close R and never open it again. We’ll be working entirely in RStudio in this class.
ALWAYS REMEMBER: Launch R though the RStudio IDE
Launch .
### 1\.3\.2 RStudio
[RStudio](http://www.rstudio.com) is an Integrated Development Environment ([IDE](https://psyteachr.github.io/glossary/i#ide "Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R.")). This is a program that serves as a text editor, file manager, and provides many functions to help you read and write R code.
Figure 1\.2: The RStudio IDE
RStudio is arranged with four window [panes](https://psyteachr.github.io/glossary/p#panes "RStudio is arranged with four window “panes.”"). By default, the upper left pane is the **source pane**, where you view and edit source code from files. The bottom left pane is usually the **console pane**, where you can type in commands and view output messages. The right panes have several different tabs that show you information about your code. You can change the location of panes and what tabs are shown under **`Preferences > Pane Layout`**.
Your browser does not support the video tag.
### 1\.3\.3 Configure RStudio
In this class, you will be learning how to do [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This involves writing scripts that completely and transparently perform some analysis from start to finish in a way that yields the same result for different people using the same software on different computers. Transparency is a key value of science, as embodied in the “trust but verify” motto.
When you do things reproducibly, others can understand and check your work. This benefits science, but there is a selfish reason, too: the most important person who will benefit from a reproducible script is your future self. When you return to an analysis after two weeks of vacation, you will thank your earlier self for doing things in a transparent, reproducible way, as you can easily pick up right where you left off.
There are two tweaks that you should do to your RStudio installation to maximize reproducibility. Go to **`Global Options...`** under the **`Tools`** menu (⌘,), and uncheck the box that says **`Restore .RData into workspace at startup`**. If you keep things around in your workspace, things will get messy, and unexpected things will happen. You should always start with a clear workspace. This also means that you never want to save your workspace when you exit, so set this to **`Never`**. The only thing you want to save are your scripts.
Figure 1\.3: Alter these settings for increased reproducibility.
Your settings should have:
* Restore .RData into workspace at startup: Checked Not Checked
* Save workspace to .RData on exit: Always Never Ask
### 1\.3\.1 The Base R Console
If you open up the application called R, you will see an “R Console” window that looks something like this.
Figure 1\.1: The R Console window.
You can close R and never open it again. We’ll be working entirely in RStudio in this class.
ALWAYS REMEMBER: Launch R though the RStudio IDE
Launch .
### 1\.3\.2 RStudio
[RStudio](http://www.rstudio.com) is an Integrated Development Environment ([IDE](https://psyteachr.github.io/glossary/i#ide "Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R.")). This is a program that serves as a text editor, file manager, and provides many functions to help you read and write R code.
Figure 1\.2: The RStudio IDE
RStudio is arranged with four window [panes](https://psyteachr.github.io/glossary/p#panes "RStudio is arranged with four window “panes.”"). By default, the upper left pane is the **source pane**, where you view and edit source code from files. The bottom left pane is usually the **console pane**, where you can type in commands and view output messages. The right panes have several different tabs that show you information about your code. You can change the location of panes and what tabs are shown under **`Preferences > Pane Layout`**.
Your browser does not support the video tag.
### 1\.3\.3 Configure RStudio
In this class, you will be learning how to do [reproducible research](https://psyteachr.github.io/glossary/r#reproducible-research "Research that documents all of the steps between raw data and results in a way that can be verified."). This involves writing scripts that completely and transparently perform some analysis from start to finish in a way that yields the same result for different people using the same software on different computers. Transparency is a key value of science, as embodied in the “trust but verify” motto.
When you do things reproducibly, others can understand and check your work. This benefits science, but there is a selfish reason, too: the most important person who will benefit from a reproducible script is your future self. When you return to an analysis after two weeks of vacation, you will thank your earlier self for doing things in a transparent, reproducible way, as you can easily pick up right where you left off.
There are two tweaks that you should do to your RStudio installation to maximize reproducibility. Go to **`Global Options...`** under the **`Tools`** menu (⌘,), and uncheck the box that says **`Restore .RData into workspace at startup`**. If you keep things around in your workspace, things will get messy, and unexpected things will happen. You should always start with a clear workspace. This also means that you never want to save your workspace when you exit, so set this to **`Never`**. The only thing you want to save are your scripts.
Figure 1\.3: Alter these settings for increased reproducibility.
Your settings should have:
* Restore .RData into workspace at startup: Checked Not Checked
* Save workspace to .RData on exit: Always Never Ask
1\.4 Getting Started
--------------------
### 1\.4\.1 Console commands
We are first going to learn about how to interact with the [console](https://psyteachr.github.io/glossary/c#console "The pane in RStudio where you can type in commands and view output messages."). In general, you will be developing R [script](https://psyteachr.github.io/glossary/s#scripts "NA") or [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") files, rather than working directly in the console window. However, you can consider the console a kind of “sandbox” where you can try out lines of code and adapt them until you get them to do what you want. Then you can copy them back into the script editor.
Mostly, however, you will be typing into the script editor window (either into an R script or an R Markdown file) and then sending the commands to the console by placing the cursor on the line and holding down the Ctrl key while you press Enter. The Ctrl\+Enter key sequence sends the command in the script to the console.
One simple way to learn about the R console is to use it as a calculator. Enter the lines of code below and see if your results match. Be prepared to make lots of typos (at first).
```
1 + 1
```
```
## [1] 2
```
The R console remembers a history of the commands you typed in the past. Use the up and down arrow keys on your keyboard to scroll backwards and forwards through your history. It’s a lot faster than re\-typing.
```
1 + 1 + 3
```
```
## [1] 5
```
You can break up mathematical expressions over multiple lines; R waits for a complete expression before processing it.
```
## here comes a long expression
## let's break it over multiple lines
1 + 2 + 3 + 4 + 5 + 6 +
7 + 8 + 9 +
10
```
```
## [1] 55
```
Text inside quotes is called a [string](https://psyteachr.github.io/glossary/s#string "A piece of text inside of quotes.").
```
"Good afternoon"
```
```
## [1] "Good afternoon"
```
You can break up text over multiple lines; R waits for a close quote before processing it. If you want to include a double quote inside this quoted string, [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it with a backslash.
```
africa <- "I hear the drums echoing tonight
But she hears only whispers of some quiet conversation
She's coming in, 12:30 flight
The moonlit wings reflect the stars that guide me towards salvation
I stopped an old man along the way
Hoping to find some old forgotten words or ancient melodies
He turned to me as if to say, \"Hurry boy, it's waiting there for you\"
- Toto"
cat(africa) # cat() prints the string
```
```
## I hear the drums echoing tonight
## But she hears only whispers of some quiet conversation
## She's coming in, 12:30 flight
## The moonlit wings reflect the stars that guide me towards salvation
## I stopped an old man along the way
## Hoping to find some old forgotten words or ancient melodies
## He turned to me as if to say, "Hurry boy, it's waiting there for you"
##
## - Toto
```
### 1\.4\.2 Objects
Often you want to store the result of some computation for later use. You can store it in an [object](https://psyteachr.github.io/glossary/o#object "A word that identifies and stores the value of some data for later use.") (also sometimes called a [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.")). An object in R:
* contains only letters, numbers, full stops, and underscores
* starts with a letter or a full stop and a letter
* distinguishes uppercase and lowercase letters (`rickastley` is not the same as `RickAstley`)
The following are valid and different objects:
* songdata
* SongData
* song\_data
* song.data
* .song.data
* never\_gonna\_give\_you\_up\_never\_gonna\_let\_you\_down
The following are not valid objects:
* \_song\_data
* 1song
* .1song
* song data
* song\-data
Use the [assignment operator](https://psyteachr.github.io/glossary/a#assignment-operator "The symbol <-, which functions like = and assigns the value on the right to the object on the left")\<\-\` to assign the value on the right to the object named on the left.
```
## use the assignment operator '<-'
## R stores the number in the object
x <- 5
```
Now that we have set `x` to a value, we can do something with it:
```
x * 2
## R evaluates the expression and stores the result in the object boring_calculation
boring_calculation <- 2 + 2
```
```
## [1] 10
```
Note that it doesn’t print the result back at you when it’s stored. To view the result, just type the object name on a blank line.
```
boring_calculation
```
```
## [1] 4
```
Once an object is assigned a value, its value doesn’t change unless you reassign the object, even if the objects you used to calculate it change. Predict what the code below does and test yourself:
```
this_year <- 2019
my_birth_year <- 1976
my_age <- this_year - my_birth_year
this_year <- 2020
```
After all the code above is run:
* `this_year` \= 43 44 1976 2019 2020
* `my_birth_year` \= 43 44 1976 2019 2020
* `my_age` \= 43 44 1976 2019 2020
### 1\.4\.3 The environment
Anytime you assign something to a new object, R creates a new entry in the [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs"). Objects in the global environment exist until you end your session; then they disappear forever (unless you save them).
Look at the **Environment** tab in the upper right pane. It lists all of the objects you have created. Click the broom icon to clear all of the objects and start fresh. You can also use the following functions in the console to view all objects, remove one object, or remove all objects.
```
ls() # print the objects in the global environment
rm("x") # remove the object named x from the global environment
rm(list = ls()) # clear out the global environment
```
In the upper right corner of the Environment tab, change **`List`** to **`Grid`**. Now you can see the type, length, and size of your objects, and reorder the list by any of these attributes.
### 1\.4\.4 Whitespace
R mostly ignores [whitespace](https://psyteachr.github.io/glossary/w#whitespace "Spaces, tabs and line breaks"): spaces, tabs, and line breaks. This means that you can use whitespace to help you organise your code.
```
# a and b are identical
a <- list(ctl = "Control Condition", exp1 = "Experimental Condition 1", exp2 = "Experimental Condition 2")
# but b is much easier to read
b <- list(ctl = "Control Condition",
exp1 = "Experimental Condition 1",
exp2 = "Experimental Condition 2")
```
When you see `>` at the beginning of a line, that means R is waiting for you to start a new command. However, if you see a `+` instead of `>` at the start of the line, that means R is waiting for you to finish a command you started on a previous line. If you want to cancel whatever command you started, just press the Esc key in the console window and you’ll get back to the `>` command prompt.
```
# R waits until next line for evaluation
(3 + 2) *
5
```
```
## [1] 25
```
It is often useful to break up long functions onto several lines.
```
cat("3, 6, 9, the goose drank wine",
"The monkey chewed tobacco on the streetcar line",
"The line broke, the monkey got choked",
"And they all went to heaven in a little rowboat",
sep = " \n")
```
```
## 3, 6, 9, the goose drank wine
## The monkey chewed tobacco on the streetcar line
## The line broke, the monkey got choked
## And they all went to heaven in a little rowboat
```
### 1\.4\.5 Function syntax
A lot of what you do in R involves calling a [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") and storing the results. A function is a named section of code that can be reused.
For example, `sd` is a function that returns the [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of the [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") of numbers that you provide as the input [argument](https://psyteachr.github.io/glossary/a#argument "A variable that provides input to a function."). Functions are set up like this:
`function_name(argument1, argument2 = "value")`.
The arguments in parentheses can be named (like, `argument1 = 10`) or you can skip the names if you put them in the exact same order that they’re defined in the function. You can check this by typing `?sd` (or whatever function name you’re looking up) into the console and the Help pane will show you the default order under **Usage**. You can also skip arguments that have a default value specified.
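For example, both of these calls to `sd()` do the same thing; the first names the argument, the second relies on position (the numbers here are just a made-up vector):

```
scores <- c(2, 4, 6, 8)

sd(x = scores)  # named argument
sd(scores)      # same result: 'scores' is matched to 'x' by position
```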
Most functions return a value, but may also produce side effects like printing to the console.
To illustrate, the function `rnorm()` generates random numbers from the standard [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable."). The help page for `rnorm()` (accessed by typing `?rnorm` in the console) shows that it has the syntax
`rnorm(n, mean = 0, sd = 1)`
where `n` is the number of randomly generated numbers you want, `mean` is the mean of the distribution, and `sd` is the standard deviation. The default mean is 0, and the default standard deviation is 1\. There is no default for `n`, which means you’ll get an error if you don’t specify it:
```
rnorm()
```
```
## Error in rnorm(): argument "n" is missing, with no default
```
If you want 10 random numbers from a normal distribution with a mean of 0 and a standard deviation of 1, you can just use the defaults.
```
rnorm(10)
```
```
## [1] -0.04234663 -2.00393149 0.83611187 -1.46404127 1.31714428 0.42608581
## [7] -0.46673798 -0.01670509 1.64072295 0.85876439
```
If you want 10 numbers from a normal distribution with a mean of 100:
```
rnorm(10, 100)
```
```
## [1] 101.34917 99.86059 100.36287 99.65575 100.66818 100.04771 99.76782
## [8] 102.57691 100.05575 99.42490
```
This would be an equivalent but less efficient way of calling the function:
```
rnorm(n = 10, mean = 100)
```
```
## [1] 100.52773 99.40241 100.39641 101.01629 99.41961 100.52202 98.09828
## [8] 99.52169 100.25677 99.92092
```
We don’t need to name the arguments because R will recognize that we intended to fill in the first and second arguments by their position in the function call. However, if we want to change the default for an argument coming later in the list, then we need to name it. For instance, if we wanted to keep the default `mean = 0` but change the standard deviation to 100 we would do it this way:
```
rnorm(10, sd = 100)
```
```
## [1] -68.254349 -17.636619 140.047575 7.570674 -68.309751 -2.378786
## [7] 117.356343 -104.772092 -40.163750 54.358941
```
Some functions give a list of options after an argument; this means the default value is the first option. The usage entry for the `power.t.test()` function looks like this:
```
power.t.test(n = NULL, delta = NULL, sd = 1, sig.level = 0.05,
             power = NULL,
             type = c("two.sample", "one.sample", "paired"),
             alternative = c("two.sided", "one.sided"),
             strict = FALSE, tol = .Machine$double.eps^0.25)
```
* What is the default value for `sd`? (choose one: NULL, 1, 0.05, two.sample)
* What is the default value for `type`? (choose one: NULL, two.sample, one.sample, paired)
* Which is equivalent to `power.t.test(100, 0.5)`? (choose one: `power.t.test(100, 0.5, sig.level = 1, sd = 0.05)`, `power.t.test()`, `power.t.test(n = 100)`, `power.t.test(delta = 0.5, n = 100)`)
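If you want to check your answer to the last question, run the positional call and a fully named version in the console and compare the output:

```
# the unnamed values are matched by position: n = 100, delta = 0.5
power.t.test(100, 0.5)
power.t.test(n = 100, delta = 0.5)
```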
### 1\.4\.6 Getting help
Start up help in a browser using the function `help.start()`.
If a function is in [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") or a loaded [package](https://psyteachr.github.io/glossary/p#package "A group of R functions."), you can use the `help("function_name")` function or the `?function_name` shortcut to access the help file. If the package isn’t loaded, specify the package name as the second argument to the help function.
```
# these methods are all equivalent ways of getting help
help("rnorm")
?rnorm
help("rnorm", package="stats")
```
When the package isn’t loaded or you aren’t sure what package the function is in, use the shortcut `??function_name`.
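For example, if you can’t remember which package `read_excel` lives in, this searches the help pages of all installed packages:

```
??read_excel
```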
* What is the first argument to the `mean` function? (choose one: trim, na.rm, mean, x)
* What package is `read_excel` in? (choose one: readr, readxl, base, stats)
1\.5 Add\-on packages
---------------------
One of the great things about R is that it is **user extensible**: anyone can create a new add\-on software package that extends its functionality. There are currently thousands of add\-on packages that R users have created to solve many different kinds of problems, or just simply to have fun. There are packages for data visualisation, machine learning, neuroimaging, eyetracking, web scraping, and playing games such as Sudoku.
Add\-on packages are not distributed with base R, but have to be downloaded and installed from an archive, in the same way that you would, for instance, download and install a fitness app on your smartphone.
The main repository where packages reside is called CRAN, the Comprehensive R Archive Network. A package has to pass strict tests devised by the R core team to be allowed to be part of the CRAN archive. You can install from the CRAN archive through R using the `install.packages()` function.
There is an important distinction between **installing** a package and **loading** a package.
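In short, installing happens once per computer, while loading happens once per R session; here is the pattern using `ggExtra`, the example package from the next two sections:

```
install.packages("ggExtra")  # install: do this once
library(ggExtra)             # load: do this in every session that uses the package
```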
### 1\.5\.1 Installing a package
This is done using `install.packages()`. This is like installing an app on your phone: you only have to do it once and the app will remain installed until you remove it. For instance, if you want to use PokemonGo on your phone, you install it once from the App Store or Play Store, and you don’t have to re\-install it each time you want to use it. Once you launch the app, it will run in the background until you close it or restart your phone. Likewise, when you install a package, the package will be available (but not *loaded*) every time you open up R.
You may only be able to permanently install packages if you are using R on your own system; you may not be able to do this on public workstations if you lack the appropriate privileges.
Install the `ggExtra` package on your system. This package lets you create plots with marginal histograms.
```
install.packages("ggExtra")
```
If you don’t already have packages like ggplot2 and shiny installed, it will also install these **dependencies** for you. If you don’t get an error message at the end, the installation was successful.
### 1\.5\.2 Loading a package
This is done using `library(packagename)`. This is like **launching** an app on your phone: the functionality is only there when the app is running and remains there until you close the app or restart your phone. Likewise, when you run `library(packagename)` within a session, the functionality of the package referred to by `packagename` will be made available for your R session. The next time you start R, you will need to run the `library()` function again if you want to access its functionality.
You can load the functions in `ggExtra` for your current R session as follows:
```
library(ggExtra)
```
You might get some red text when you load a package; this is normal. It is usually warning you that this package has functions with the same names as functions in other packages you’ve already loaded.
You can use the convention `package::function()` to indicate in which add\-on package a function resides. For instance, if you see `readr::read_csv()`, that refers to the function `read_csv()` in the `readr` add\-on package.
Now you can run the function `ggExtra::runExample()`, which runs an interactive example of marginal plots using shiny.
```
ggExtra::runExample()
```
### 1\.5\.3 Install from GitHub
Many R packages are not yet on [CRAN](https://psyteachr.github.io/glossary/c#cran "The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up-to-date, versions of code and documentation for R.") because they are still in development. Increasingly, datasets and code for papers are available as packages you can download from GitHub. You’ll need to install the `devtools` package to be able to install packages from GitHub. Check whether you have a package installed by trying to load it (e.g., if you don’t have devtools installed, `library("devtools")` will display an error message) or by searching for it in the Packages tab in the lower right pane. All listed packages are installed; all checked packages are currently loaded.
Figure 1\.4: Check installed and loaded packages in the packages tab in the lower right pane.
```
# install devtools if you get
# Error in loadNamespace(name) : there is no package called ‘devtools’
# install.packages("devtools")
devtools::install_github("psyteachr/msc-data-skills")
```
After you install the dataskills package, load it using the `library()` function. You can then try out some of the functions below.
* `book()` opens a local copy of this book in your web browser.
* `app("plotdemo")` opens a shiny app that lets you see how simulated data would look in different plot styles
* `exercise(1)` creates and opens a file containing the exercises for this chapter
* `?disgust` shows you the documentation for the built\-in dataset `disgust`, which we will be using in future lessons
```
library(dataskills)
book()
app("plotdemo")
exercise(1)
?disgust
```
How many different ways can you find to discover what functions are available in the dataskills package?
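A few possibilities are sketched below; they assume the dataskills package is installed, and `ls()` additionally requires it to be loaded with `library(dataskills)`:

```
help(package = "dataskills")       # open the package's help index
ls("package:dataskills")           # list objects in the loaded package
getNamespaceExports("dataskills")  # list exported functions without loading it
```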
1\.6 Organising a project
-------------------------
[Projects](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") in RStudio are a way to group all of the files you need for one project. Most projects include scripts, data files, and output files like the PDF version of the script or images.
Make a new directory where you will keep all of your materials for this class. If you’re using a lab computer, make sure you make this directory in your network drive so you can access it from other computers.
Choose **`New Project…`** under the **`File`** menu to create a new project called `01-intro` in this directory.
### 1\.6\.1 Structure
Here is what an R script looks like. Don’t worry about the details for now.
```
# load add-on packages
library(tidyverse)

# set object ----
n <- 100

# simulate data ----
data <- data.frame(
  id = 1:n,
  dv = c(rnorm(n/2, 0), rnorm(n/2, 1)),
  condition = rep(c("A", "B"), each = n/2)
)

# plot data ----
ggplot(data, aes(condition, dv)) +
  geom_violin(trim = FALSE) +
  geom_boxplot(width = 0.25,
               aes(fill = condition),
               show.legend = FALSE)

# save plot ----
ggsave("sim_data.png", width = 8, height = 6)
```
It’s best to follow this structure when developing your own scripts:
* load in any add\-on packages you need to use
* define any custom functions
* load or simulate the data you will be working with
* work with the data
* save anything you need to save
Often when you are working on a script, you will realize that you need to load another add\-on package. Don’t bury the call to `library(package_I_need)` way down in the script. Put it at the top, so the user has an overview of what packages are needed.
You can add comments to an R script with the hash symbol (`#`). The R interpreter will ignore characters from the hash to the end of the line.
```
## comments: any text from '#' on is ignored until end of line
22 / 7 # approximation to pi
```
```
## [1] 3.142857
```
If you add 4 or more dashes to the end of a comment, it acts like a header and creates an outline that you can see in the document outline (⇧⌘O).
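For example, each of the comments below would appear as a section header in the document outline:

```
# load packages ----

# import data ----

# any comment ending in four or more dashes, like the two above,
# shows up in the document outline
```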
### 1\.6\.2 Reproducible reports with R Markdown
We will make reproducible reports following the principles of [literate programming](https://en.wikipedia.org/wiki/Literate_programming). The basic idea is to have the text of the report together in a single document along with the code needed to perform all analyses and generate the tables. The report is then “compiled” from the original format into some other, more portable format, such as HTML or PDF. This is different from traditional cutting and pasting approaches where, for instance, you create a graph in Microsoft Excel or a statistics program like SPSS and then paste it into Microsoft Word.
We will use [R Markdown](https://psyteachr.github.io/glossary/r#r-markdown "The R-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code.") to create reproducible reports, which enables mixing of text and code. A reproducible script will contain sections of code in code blocks. A code block starts and ends with three backtick symbols in a row, with some information about the code between curly brackets, such as `{r chunk-name, echo=FALSE}` (this runs the code, but does not show the text of the code block in the compiled document). The text outside of code blocks is written in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links."), which is a way to specify formatting, such as headers, paragraphs, lists, bolding, and links.
Figure 1\.5: A reproducible script.
If you open up a new R Markdown file from a template, you will see an example document with several code blocks in it. To create an HTML or PDF report from an R Markdown (Rmd) document, you compile it. Compiling a document is called [knitting](https://psyteachr.github.io/glossary/k#knit "To create an HTML, PDF, or Word document from an R Markdown (Rmd) document") in RStudio. There is a button that looks like a ball of yarn with needles through it that you click on to compile your file into a report.
Create a new R Markdown file from the **`File > New File > R Markdown…`** menu. Change the title and author, then click the knit button to create an html file.
### 1\.6\.3 Working Directory
Where should you put all of your files? When developing an analysis, you usually want to have all of your scripts and data files in one subtree of your computer’s directory structure. Usually there is a single [working directory](https://psyteachr.github.io/glossary/w#working-directory "The filepath where R is currently reading and writing files.") where your data and scripts are stored.
Your script should only reference files in three locations, using the appropriate format.
| Where | Example |
| --- | --- |
| on the web | “[https://psyteachr.github.io/msc\-data\-skills/data/disgust\_scores.csv](https://psyteachr.github.io/msc-data-skills/data/disgust_scores.csv)” |
| in the working directory | “disgust\_scores.csv” |
| in a subdirectory | “data/disgust\_scores.csv” |
Never set or change your working directory in a script.
If you are working with an R Markdown file, it will automatically use the same directory the .Rmd file is in as the working directory.
If you are working with R scripts, store your main script file in the top\-level directory and manually set your working directory to that location. You will have to reset the working directory each time you open RStudio, unless you create a [project](https://psyteachr.github.io/glossary/p#project "A way to organise related files in RStudio") and access the script from the project.
For instance, if you are on a Windows machine and your data and scripts are in the directory `C:\Carla's_files\thesis2\my_thesis\new_analysis`, you will set your working directory in one of two ways: (1\) by going to the `Session` pull\-down menu in RStudio and choosing `Set Working Directory`, or (2\) by typing `setwd("C:\Carla's_files\thesis2\my_thesis\new_analysis")` in the console window.
It’s tempting to make your life simple by putting the `setwd()` command in your script. Don’t do this! Others will not have the same directory tree as you (and when your laptop dies and you get a new one, neither will you).
When manually setting the working directory, always do so by using the **`Session > Set Working Directory`** pull\-down option or by typing `setwd()` in the console.
If your script needs a file in a subdirectory of `new_analysis`, say, `data/questionnaire.csv`, load it in using a [relative path](https://psyteachr.github.io/glossary/r#relative-path "The location of a file in relation to the working directory.") so that it is accessible if you move the folder `new_analysis` to another location or computer:
```
dat <- read_csv("data/questionnaire.csv") # correct
```
Do not load it in using an [absolute path](https://psyteachr.github.io/glossary/a#absolute-path "A file path that starts with / and is not appended to the working directory"):
```
dat <- read_csv("C:/Carla's_files/thesis22/my_thesis/new_analysis/data/questionnaire.csv") # wrong
```
Also note the convention of using forward slashes, unlike the Windows\-specific convention of using backward slashes. This is to make references to files platform independent.
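If you prefer, the base R function `file.path()` builds the same relative path in a platform-independent way:

```
# these two lines refer to the same file on any operating system
dat <- read_csv("data/questionnaire.csv")
dat <- read_csv(file.path("data", "questionnaire.csv"))
```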
1\.7 Glossary
-------------
Each chapter ends with a glossary table defining the jargon introduced in this chapter. The links below take you to the [glossary book](https://psyteachr.github.io/glossary), which you can also download for offline use with `devtools::install_github("psyteachr/glossary")` and access the glossary offline with `glossary::book()`.
| term | definition |
| --- | --- |
| [absolute path](https://psyteachr.github.io/glossary/a#absolute.path) | A file path that starts with / and is not appended to the working directory |
| [argument](https://psyteachr.github.io/glossary/a#argument) | A variable that provides input to a function. |
| [assignment operator](https://psyteachr.github.io/glossary/a#assignment.operator) | The symbol \<\-, which functions like \= and assigns the value on the right to the object on the left |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [console](https://psyteachr.github.io/glossary/c#console) | The pane in RStudio where you can type in commands and view output messages. |
| [cran](https://psyteachr.github.io/glossary/c#cran) | The Comprehensive R Archive Network: a network of ftp and web servers around the world that store identical, up\-to\-date, versions of code and documentation for R. |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [ide](https://psyteachr.github.io/glossary/i#ide) | Integrated Development Environment: a program that serves as a text editor, file manager, and provides functions to help you read and write code. RStudio is an IDE for R. |
| [knit](https://psyteachr.github.io/glossary/k#knit) | To create an HTML, PDF, or Word document from an R Markdown (Rmd) document |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [object](https://psyteachr.github.io/glossary/o#object) | A word that identifies and stores the value of some data for later use. |
| [package](https://psyteachr.github.io/glossary/p#package) | A group of R functions. |
| [panes](https://psyteachr.github.io/glossary/p#panes) | RStudio is arranged with four window “panes.” |
| [project](https://psyteachr.github.io/glossary/p#project) | A way to organise related files in RStudio |
| [r markdown](https://psyteachr.github.io/glossary/r#r.markdown) | The R\-specific version of markdown: a way to specify formatting, such as headers, paragraphs, lists, bolding, and links, as well as code blocks and inline code. |
| [relative path](https://psyteachr.github.io/glossary/r#relative.path) | The location of a file in relation to the working directory. |
| [reproducible research](https://psyteachr.github.io/glossary/r#reproducible.research) | Research that documents all of the steps between raw data and results in a way that can be verified. |
| [script](https://psyteachr.github.io/glossary/s#script) | A plain\-text file that contains commands in a coding language, such as R. |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
| [string](https://psyteachr.github.io/glossary/s#string) | A piece of text inside of quotes. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [whitespace](https://psyteachr.github.io/glossary/w#whitespace) | Spaces, tabs and line breaks |
| [working directory](https://psyteachr.github.io/glossary/w#working.directory) | The filepath where R is currently reading and writing files. |
1\.8 Exercises
--------------
Download the first set of [exercises](exercises/01_intro_exercise.Rmd) and put it in the project directory you created earlier for today’s exercises. See the [answers](exercises/01_intro_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(1)
# run this to access the answers
dataskills::exercise(1, answers = TRUE)
```
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/data.html |
Chapter 2 Working with Data
===========================
2\.1 Learning Objectives
------------------------
1. Load [built\-in datasets](data.html#builtin) [(video)](https://youtu.be/Z5fK5VGmzlY)
2. [Import data](data.html#import_data) from CSV and Excel files [(video)](https://youtu.be/a7Ra-hnB8l8)
3. Create a [data table](data.html#tables) [(video)](https://youtu.be/k-aqhurepb4)
4. Understand and use the [basic data types](data.html#data_types) [(video)](https://youtu.be/jXQrF18Jaac)
5. Understand and use the [basic container types](data.html#containers) (list, vector) [(video)](https://youtu.be/4xU7uKNdoig)
6. Use [vectorized operations](data.html#vectorized_ops) [(video)](https://youtu.be/9I5MdS7UWmI)
7. Be able to [troubleshoot](#Troubleshooting) common data import problems [(video)](https://youtu.be/gcxn4LJ_vAI)
2\.2 Resources
--------------
* [Chapter 11: Data Import](http://r4ds.had.co.nz/data-import.html) in *R for Data Science*
* [RStudio Data Import Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/data-import.pdf)
* [Scottish Babynames](https://www.nrscotland.gov.uk/files//statistics/babies-first-names-full-list/summary-records/babies-names16-all-names-years.csv)
* [Developing an analysis in R/RStudio: Scottish babynames (1/2\)](https://www.youtube.com/watch?v=lAaVPMcMs1w)
* [Developing an analysis in R/RStudio: Scottish babynames (2/2\)](https://www.youtube.com/watch?v=lzdTHCcClqo)
2\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(dataskills)
```
2\.4 Data tables
----------------
### 2\.4\.1 Built\-in data
R comes with built\-in datasets. Some packages, like tidyr and dataskills, also contain data. The `data()` function lists the datasets available in a package.
```
# lists datasets in dataskills
data(package = "dataskills")
```
Type the name of a dataset into the console to see the data. Type `?smalldata` into the console to see the dataset description.
```
smalldata
```
| id | group | pre | post |
| --- | --- | --- | --- |
| S01 | control | 98\.46606 | 106\.70508 |
| S02 | control | 104\.39774 | 89\.09030 |
| S03 | control | 105\.13377 | 123\.67230 |
| S04 | control | 92\.42574 | 70\.70178 |
| S05 | control | 123\.53268 | 124\.95526 |
| S06 | exp | 97\.48676 | 101\.61697 |
| S07 | exp | 87\.75594 | 126\.30077 |
| S08 | exp | 77\.15375 | 72\.31229 |
| S09 | exp | 97\.00283 | 108\.80713 |
| S10 | exp | 102\.32338 | 113\.74732 |
You can also use the `data()` function to load a dataset into your [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs").
```
# loads smalldata into the environment
data("smalldata")
```
Always, always, always look at your data once you’ve created or loaded a table. Also look at it after each step that transforms your table. There are three main ways to look at your tibble: `print()`, `glimpse()`, and `View()`.
The `print()` method can be run explicitly, but is more commonly called by just typing the variable name on a blank line. The default is not to print the entire table, but just the first 10 rows. It’s rare to print your data in a script; that is something you usually do as a sanity check, and you should just do it in the console.
Let’s look at the `smalldata` table that we made above.
```
smalldata
```
| id | group | pre | post |
| --- | --- | --- | --- |
| S01 | control | 98\.46606 | 106\.70508 |
| S02 | control | 104\.39774 | 89\.09030 |
| S03 | control | 105\.13377 | 123\.67230 |
| S04 | control | 92\.42574 | 70\.70178 |
| S05 | control | 123\.53268 | 124\.95526 |
| S06 | exp | 97\.48676 | 101\.61697 |
| S07 | exp | 87\.75594 | 126\.30077 |
| S08 | exp | 77\.15375 | 72\.31229 |
| S09 | exp | 97\.00283 | 108\.80713 |
| S10 | exp | 102\.32338 | 113\.74732 |
The function `glimpse()` gives a sideways version of the tibble. This is useful if the table is very wide and you can’t see all of the columns. It also tells you the data type of each column in angled brackets after each column name. We’ll learn about [data types](data.html#data_types) below.
```
glimpse(smalldata)
```
```
## Rows: 10
## Columns: 4
## $ id <chr> "S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08", "S09", "…
## $ group <chr> "control", "control", "control", "control", "control", "exp", "e…
## $ pre <dbl> 98.46606, 104.39774, 105.13377, 92.42574, 123.53268, 97.48676, 8…
## $ post <dbl> 106.70508, 89.09030, 123.67230, 70.70178, 124.95526, 101.61697, …
```
The other way to look at the table is a more graphical spreadsheet\-like version given by `View()` (capital ‘V’). It can be useful in the console, but don’t ever put this one in a script because it will create an annoying pop\-up window when the user runs it.
Now you can click on `smalldata` in the environment pane to open it up in a viewer that looks a bit like Excel.
You can get a quick summary of a dataset with the `summary()` function.
```
summary(smalldata)
```
```
## id group pre post
## Length:10 Length:10 Min. : 77.15 Min. : 70.70
## Class :character Class :character 1st Qu.: 93.57 1st Qu.: 92.22
## Mode :character Mode :character Median : 97.98 Median :107.76
## Mean : 98.57 Mean :103.79
## 3rd Qu.:103.88 3rd Qu.:121.19
## Max. :123.53 Max. :126.30
```
You can even do things like calculate the difference between the means of two columns.
```
pre_mean <- mean(smalldata$pre)
post_mean <- mean(smalldata$post)
post_mean - pre_mean
```
```
## [1] 5.223055
```
### 2\.4\.2 Importing data
Built\-in data are nice for examples, but you’re probably more interested in your own data. There are many different types of files that you might work with when doing data analysis. These different file types are usually distinguished by the three letter [extension](https://psyteachr.github.io/glossary/e#extension "The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd).") following a period at the end of the file name. Here are some examples of different types of files and the functions you would use to read them in or write them out.
| Extension | File Type | Reading | Writing |
| --- | --- | --- | --- |
| .csv | Comma\-separated values | `readr::read_csv()` | `readr::write_csv()` |
| .tsv, .txt | Tab\-separated values | `readr::read_tsv()` | `readr::write_tsv()` |
| .xls, .xlsx | Excel workbook | `readxl::read_excel()` | NA |
| .sav, .mat, … | Multiple types | `rio::import()` | NA |
The double colon means that the function on the right comes from the package on the left, so `readr::read_csv()` refers to the `read_csv()` function in the `readr` package, and `readxl::read_excel()` refers to the function `read_excel()` in the package `readxl`. The function `rio::import()` from the `rio` package will read almost any type of data file, including SPSS and Matlab. Check the help with `?rio::import` to see a full list.
You can get a directory of data files used in this class for tutorials and exercises with the following code, which will create a directory called “data” in your project directory. Alternatively, you can download a [zip file of the datasets](data/data.zip).
```
dataskills::getdata()
```
Probably the most common file type you will encounter is [.csv](https://psyteachr.github.io/glossary/c#csv "Comma-separated variable: a file type for representing data where each variable is separated from the next by a comma.") (comma\-separated values). As the name suggests, a CSV file distinguishes which values go with which variable by separating them with commas, and text values are sometimes enclosed in double quotes. The first line of a file usually provides the names of the variables.
For example, here are the first few lines of a CSV containing personality scores:
```
subj_id,O,C,E,A,N
S01,4.428571429,4.5,3.333333333,5.142857143,1.625
S02,5.714285714,2.9,3.222222222,3,2.625
S03,5.142857143,2.8,6,3.571428571,2.5
S04,3.142857143,5.2,1.333333333,1.571428571,3.125
S05,5.428571429,4.4,2.444444444,4.714285714,1.625
```
There are six variables in this dataset, and their names are given in the first line of the file: `subj_id`, `O`, `C`, `E`, `A`, and `N`. You can see that the values for each of these variables are given in order, separated by commas, on each subsequent line of the file.
When you read in CSV files, it is best practice to use the `readr::read_csv()` function. The `readr` package is automatically loaded as part of the `tidyverse` package, which we will be using in almost every script. Note that you would normally want to store the result of the `read_csv()` function in an object, like so:
```r
csv_data <- read_csv("data/5factor.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## subj_id = col_character(),
## O = col_double(),
## C = col_double(),
## E = col_double(),
## A = col_double(),
## N = col_double()
## )
```
The `read_csv()` and `read_tsv()` functions will give you some information about the data you just read in so you can check the column names and [data types](data.html#data_types). For now, it’s enough to know that `col_double()` refers to columns with numbers and `col_character()` refers to columns with words. We’ll learn in the [troubleshooting](data.html#troubleshooting) section below how to fix it if the function guesses the wrong data type.
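As a preview of that section, here is a minimal sketch of telling `read_csv()` the column types yourself instead of letting it guess, using the same `data/5factor.csv` file as above:

```
csv_data <- read_csv(
  "data/5factor.csv",
  col_types = cols(
    subj_id = col_character(),
    O = col_double(),
    C = col_double(),
    E = col_double(),
    A = col_double(),
    N = col_double()
  )
)
```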
```
tsv_data <- read_tsv("data/5factor.txt")
xls_data <- readxl::read_xls("data/5factor.xls")
# you can load sheets from excel files by name or number
rep_data <- readxl::read_xls("data/5factor.xls", sheet = "replication")
spss_data <- rio::import("data/5factor.sav")
```
Once loaded, you can view your data using the data viewer. In the upper right hand window of RStudio, under the Environment tab, you will see the objects you have just loaded (such as `csv_data`) listed.
If you click on the View icon next to an object, it will bring up a table view of the data you loaded in the top left pane of RStudio.
This allows you to check that the data have been loaded in properly. You can close the tab when you’re done looking at it; closing it won’t remove the object.
### 2\.4\.3 Creating data
If we are creating a data table from scratch, we can use the `tibble::tibble()` function, and type the data right in. The `tibble` package is part of the [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse "A set of R packages that help you create and work with tidy data") package that we loaded at the start of this chapter.
Let’s create a small table with the names of three Avatar characters and their bending type. The `tibble()` function takes arguments with the names that you want your columns to have. The values are vectors that list the column values in order.
If you don’t know the value for one of the cells, you can enter `NA`, which we have to do for Sokka because he doesn’t have any bending ability. If all the values in the column are the same, you can just enter one value and it will be copied for each row.
```
avatar <- tibble(
name = c("Katara", "Toph", "Sokka"),
bends = c("water", "earth", NA),
friendly = TRUE
)
# print it
avatar
```
| name | bends | friendly |
| --- | --- | --- |
| Katara | water | TRUE |
| Toph | earth | TRUE |
| Sokka | NA | TRUE |
### 2\.4\.4 Writing Data
If you have data that you want to save to a CSV file, use `readr::write_csv()`, as follows.
```
write_csv(avatar, "avatar.csv")
```
This will save the data in CSV format to your working directory.
* Create a new table called `family` with the first name, last name, and age of your family members.
* Save it to a CSV file called “family.csv”.
* Clear the object from your environment by restarting R or with the code `remove(family)`.
* Load the data back in and view it.
We’ll be working with [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column.") a lot in this class, but tabular data is made up of [vectors](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings."), which group together data with the same basic [data type](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object."). The following sections explain some of this terminology to help you understand the functions we’ll be learning to process and analyse data.
2\.5 Basic data types
---------------------
Data can be numbers, words, true/false values, or combinations of these. In order to understand some later concepts, it’s useful to have a basic understanding of [data types](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object.") in R: [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer."), [character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text."), and [logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values."). There is also a specific data type called a [factor](https://psyteachr.github.io/glossary/f#factor "A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter"), which will probably give you a headache sooner or later, but we can ignore it for now.
### 2\.5\.1 Numeric data
All of the real numbers are [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer.") data types (imaginary numbers are “complex”). There are two types of numeric data, [integer](https://psyteachr.github.io/glossary/i#integer "A data type representing whole numbers.") and [double](https://psyteachr.github.io/glossary/d#double "A data type representing a real decimal number"). Integers are the whole numbers, like \-1, 0 and 1\. Doubles are numbers that can have fractional amounts. If you just type a plain number such as `10`, it is stored as a double, even if it doesn’t have a decimal point. If you want it to be an exact integer, use the `L` suffix (10L).
If you ever want to know the data type of something, use the `typeof` function.
```
typeof(10) # double
typeof(10.0) # double
typeof(10L) # integer
typeof(10i) # complex
```
```
## [1] "double"
## [1] "double"
## [1] "integer"
## [1] "complex"
```
If you want to know if something is numeric (a double or an integer), you can use the function `is.numeric()` and it will tell you if it is numeric (`TRUE`) or not (`FALSE`).
```
is.numeric(10L)
is.numeric(10.0)
is.numeric("Not a number")
```
```
## [1] TRUE
## [1] TRUE
## [1] FALSE
```
### 2\.5\.2 Character data
[Character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text.") strings are any text between quotation marks.
```
typeof("This is a character string")
typeof('You can use double or single quotes')
```
```
## [1] "character"
## [1] "character"
```
This can include quotes, but you have to [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it using a backslash to signal that the quote isn’t meant to be the end of the string.
```
my_string <- "The instructor said, \"R is cool,\" and the class agreed."
cat(my_string) # cat() prints the arguments
```
```
## The instructor said, "R is cool," and the class agreed.
```
### 2\.5\.3 Logical Data
[Logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values.") data (also sometimes called “boolean” values) can take one of two values: true or false. In R, we always write them in uppercase: `TRUE` and `FALSE`.
```
class(TRUE)
class(FALSE)
```
```
## [1] "logical"
## [1] "logical"
```
When you compare two values with an [operator](https://psyteachr.github.io/glossary/o#operator "A symbol that performs a mathematical operation, such as +, -, *, /"), such as checking to see if 10 is greater than 5, the resulting value is logical.
```
is.logical(10 > 5)
```
```
## [1] TRUE
```
You might also see logical values abbreviated as `T` and `F`, or `0` and `1`. This can cause some problems down the road, so we will always spell out the whole thing.
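To see why the abbreviations are risky, note that `TRUE` and `FALSE` are reserved words, but `T` and `F` are ordinary variables that merely start out with those values, so they can be overwritten. A minimal sketch you can try in the console:

```
T             # starts out as TRUE
T <- 0        # ...but nothing stops you from overwriting it
T == TRUE     # now FALSE, which can silently break later code
rm(T)         # remove your copy so T goes back to its default
```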
What data type is each of these (integer, double, character, logical, or factor)?
* `100`
* `100L`
* `"100"`
* `100.0`
* `-100L`
* `factor(100)`
* `TRUE`
* `"TRUE"`
* `FALSE`
* `1 == 2`
2\.6 Basic container types
--------------------------
Individual data values can be grouped together into containers. The main types of containers we’ll work with are vectors, lists, and data tables.
### 2\.6\.1 Vectors
A [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") in R is like a vector in mathematics: a set of ordered elements. All of the elements in a vector must be of the same **data type** (numeric, character, logical). You can create a vector by enclosing the elements in the function `c()`.
```
## put information into a vector using c(...)
c(1, 2, 3, 4)
c("this", "is", "cool")
1:6 # shortcut to make a vector of all integers x:y
```
```
## [1] 1 2 3 4
## [1] "this" "is" "cool"
## [1] 1 2 3 4 5 6
```
What happens when you mix types? What class is the variable `mixed`?
```
mixed <- c(2, "good", 2L, "b", TRUE)
```
You can’t mix data types in a vector; all elements of the vector must be the same data type. If you mix them, R will “coerce” them so that they are all the same. If you mix doubles and integers, the integers will be changed to doubles. If you mix characters and numeric types, the numbers will be coerced to characters, so `10` would turn into `"10"`.
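As a quick check of the question above, `typeof()` shows what `mixed` was coerced to (a sketch you can paste into the console):

```
mixed <- c(2, "good", 2L, "b", TRUE)
typeof(mixed) # "character": everything was coerced to a string
mixed         # "2" "good" "2" "b" "TRUE"
```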
#### 2\.6\.1\.1 Selecting values from a vector
If we wanted to pick specific values out of a vector by position, we can use square brackets (an [extract operator](https://psyteachr.github.io/glossary/e#extract-operator "A symbol used to get values from a container object, such as [, [[, or $"), or `[]`) after the vector.
```
values <- c(10, 20, 30, 40, 50)
values[2] # selects the second value
```
```
## [1] 20
```
You can select more than one value from the vector by putting a vector of numbers inside the square brackets. For example, you can select the 18th, 19th, 20th, 21st, 4th, 9th and 15th letter from the built\-in vector `LETTERS` (which gives all the uppercase letters in the Latin alphabet).
```
word <- c(18, 19, 20, 21, 4, 9, 15)
LETTERS[word]
```
```
## [1] "R" "S" "T" "U" "D" "I" "O"
```
Can you decode the secret message?
```
secret <- c(14, 5, 22, 5, 18, 7, 15, 14, 14, 1, 7, 9, 22, 5, 25, 15, 21, 21, 16)
```
You can also create ‘named’ vectors, where each element has a name. For example:
```
vec <- c(first = 77.9, second = -13.2, third = 100.1)
vec
```
```
## first second third
## 77.9 -13.2 100.1
```
We can then access elements by name using a character vector within the square brackets. We can put them in any order we want, and we can repeat elements:
```
vec[c("third", "second", "second")]
```
```
## third second second
## 100.1 -13.2 -13.2
```
We can get the vector of names using the `names()` function, and we can set or change them using something like `names(vec2) <- c("n1", "n2", "n3")`.
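For example, here is a short sketch that renames a copy of the `vec` vector from above (the name `vec2` is just for illustration):

```
vec2 <- vec                         # work on a copy
names(vec2)                         # "first" "second" "third"
names(vec2) <- c("n1", "n2", "n3")  # overwrite the names
vec2["n2"]                          # access by the new name: -13.2
```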
Another way to access elements is by using a logical vector within the square brackets. This will pull out the elements of the vector for which the corresponding element of the logical vector is `TRUE`. If the logical vector doesn’t have the same length as the original, it will repeat. You can find out how long a vector is using the `length()` function.
```
length(LETTERS)
LETTERS[c(TRUE, FALSE)]
```
```
## [1] 26
## [1] "A" "C" "E" "G" "I" "K" "M" "O" "Q" "S" "U" "W" "Y"
```
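In practice, a logical index usually comes from a comparison rather than being typed out by hand. Here is a small sketch using the `vec` vector defined above:

```
vec           # first: 77.9, second: -13.2, third: 100.1
vec > 0       # TRUE FALSE TRUE
vec[vec > 0]  # keeps only the positive values: 77.9 and 100.1
```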
#### 2\.6\.1\.2 Repeating Sequences
Here are some useful tricks to save typing when creating vectors.
In the command `x:y`, the `:` operator gives you the sequence of numbers starting at `x` and going to `y` in increments of 1\.
```
1:10
15.3:20.5
0:-10
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
## [1] 15.3 16.3 17.3 18.3 19.3 20.3
## [1] 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10
```
What if you want to create a sequence but with something other than integer steps? You can use the `seq()` function. Look at the examples below and work out what the arguments do.
```
seq(from = -1, to = 1, by = 0.2)
seq(0, 100, length.out = 11)
seq(0, 10, along.with = LETTERS)
```
```
## [1] -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0
## [1] 0 10 20 30 40 50 60 70 80 90 100
## [1] 0.0 0.4 0.8 1.2 1.6 2.0 2.4 2.8 3.2 3.6 4.0 4.4 4.8 5.2 5.6
## [16] 6.0 6.4 6.8 7.2 7.6 8.0 8.4 8.8 9.2 9.6 10.0
```
What if you want to repeat a vector many times? You could either type it out (painful) or use the `rep()` function, which can repeat vectors in different ways.
```
rep(0, 10) # ten zeroes
rep(c(1L, 3L), times = 7) # alternating 1 and 3, 7 times
rep(c("A", "B", "C"), each = 2) # A to C, 2 times each
```
```
## [1] 0 0 0 0 0 0 0 0 0 0
## [1] 1 3 1 3 1 3 1 3 1 3 1 3 1 3
## [1] "A" "A" "B" "B" "C" "C"
```
The `rep()` function is useful to create a vector of logical values (`TRUE`/`FALSE` or `1`/`0`) to select values from another vector.
```
# Get subject IDs in the pattern Y Y N N ...
subject_ids <- 1:40
yynn <- rep(c(TRUE, FALSE), each = 2,
length.out = length(subject_ids))
subject_ids[yynn]
```
```
## [1] 1 2 5 6 9 10 13 14 17 18 21 22 25 26 29 30 33 34 37 38
```
#### 2\.6\.1\.3 Vectorized Operations
R performs calculations on vectors in a special way. Let’s look at an example using \\(z\\)\-scores. A \\(z\\)\-score is a [deviation score](https://psyteachr.github.io/glossary/d#deviation-score "A score minus the mean") (a score minus a mean) divided by a standard deviation. Let’s say we have a set of four IQ scores.
```
## example IQ scores: mu = 100, sigma = 15
iq <- c(86, 101, 127, 99)
```
If we want to subtract the mean from these four scores, we just use the following code:
```
iq - 100
```
```
## [1] -14 1 27 -1
```
This subtracts 100 from each element of the vector. R automatically assumes that this is what you wanted to do; it is called a [vectorized](https://psyteachr.github.io/glossary/v#vectorized "An operator or function that acts on each element in a vector") operation and it makes it possible to express operations more efficiently.
To calculate \\(z\\)\-scores we use the formula:
\\(z = \\frac{X - \\mu}{\\sigma}\\)
where X are the scores, \\(\\mu\\) is the mean, and \\(\\sigma\\) is the standard deviation. We can express this formula in R as follows:
```
## z-scores
(iq - 100) / 15
```
```
## [1] -0.93333333 0.06666667 1.80000000 -0.06666667
```
You can see that it computed all four \\(z\\)\-scores with a single line of code. In later chapters, we’ll use vectorised operations to process our data, such as reverse\-scoring some questionnaire items.
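As a preview of reverse\-scoring (a sketch only; the scale and responses here are made up), reversing an item scored from 1 to 7 is just another vectorized operation: subtract each response from 8.

```
## hypothetical responses to a reverse-keyed item on a 1-7 scale
responses <- c(2, 7, 5, 1)
8 - responses  # reverse-scored: 6 1 3 7
```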
### 2\.6\.2 Lists
Recall that vectors can contain data of only one type. What if you want to store a collection of data of different data types? For that purpose you would use a [list](https://psyteachr.github.io/glossary/l#list "A container data type that allows items with different data types to be grouped together."). Define a list using the `list()` function.
```
data_types <- list(
double = 10.0,
integer = 10L,
character = "10",
logical = TRUE
)
str(data_types) # str() prints lists in a condensed format
```
```
## List of 4
## $ double : num 10
## $ integer : int 10
## $ character: chr "10"
## $ logical : logi TRUE
```
You can refer to elements of a list using square brackets like a vector, but you can also use the dollar sign notation (`$`) if the list items have names.
```
data_types$logical
```
```
## [1] TRUE
```
Explore the 5 ways shown below to extract a value from a list. What data type is each object? What is the difference between the single and double brackets? Which one is the same as the dollar sign?
```
bracket1 <- data_types[1]
bracket2 <- data_types[[1]]
name1 <- data_types["double"]
name2 <- data_types[["double"]]
dollar <- data_types$double
```
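If you want to check your conclusions in the console, `typeof()` makes the pattern clear: single brackets return a smaller list, while double brackets and the dollar sign return the value stored inside.

```
typeof(bracket1)  # "list": [ returns a one-element list
typeof(bracket2)  # "double": [[ returns the value itself
typeof(dollar)    # "double": $ behaves like [[ for named elements
```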
### 2\.6\.3 Tables
The built\-in, imported, and created data above are [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column."), data arranged in the form of a table.
Tabular data structures allow for a collection of data of different types (characters, integers, logical, etc.) but subject to the constraint that each “column” of the table (element of the list) must have the same number of elements. The base R version of a table is called a `data.frame`, while the ‘tidyverse’ version is called a `tibble`. Tibbles are far easier to work with, so we’ll be using those. To learn more about differences between these two data structures, see `vignette("tibble")`.
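For a quick feel for the difference (just a sketch), compare how the two print in the console: a tibble reports its dimensions and column types, while a plain data frame does not.

```
df  <- data.frame(x = 1:3, y = c("a", "b", "c"))
tbl <- tibble(x = 1:3, y = c("a", "b", "c"))
df   # prints all rows with no type information
tbl  # prints "A tibble: 3 x 2" plus the column types <int> and <chr>
```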
Tabular data becomes especially important for when we talk about [tidy data](https://psyteachr.github.io/glossary/t#tidy-data "A format for data that maps the meaning onto the structure.") in [chapter 4](tidyr.html#tidyr), which consists of a set of simple principles for structuring data.
#### 2\.6\.3\.1 Creating a table
We learned how to create a table by importing an Excel or CSV file, and how to create a table from scratch using the `tibble()` function. You can also use the `tibble::tribble()` function to create a table by row, rather than by column. You start by listing the column names, each preceded by a tilde (`~`), then you list the values for each column, row by row, separated by commas (don’t forget a comma at the end of each row). This method can be easier for some data, but doesn’t let you use shortcuts, like setting all of the values in a column to the same value or a [repeating sequence](data.html#rep_seq).
```
# by column using tibble
avatar_by_col <- tibble(
name = c("Katara", "Toph", "Sokka", "Azula"),
bends = c("water", "earth", NA, "fire"),
friendly = rep(c(TRUE, FALSE), c(3, 1))
)
# by row using tribble
avatar_by_row <- tribble(
~name, ~bends, ~friendly,
"Katara", "water", TRUE,
"Toph", "earth", TRUE,
"Sokka", NA, TRUE,
"Azula", "fire", FALSE
)
```
#### 2\.6\.3\.2 Table info
We can get information about the table using the functions `ncol()` (number of columns), `nrow()` (number of rows), `dim()` (the number of rows and number of columns), and `names()` (the column names).
```
nrow(avatar) # how many rows?
ncol(avatar) # how many columns?
dim(avatar) # what are the table dimensions?
names(avatar) # what are the column names?
```
```
## [1] 3
## [1] 3
## [1] 3 3
## [1] "name" "bends" "friendly"
```
#### 2\.6\.3\.3 Accessing rows and columns
There are various ways of accessing specific columns or rows from a table. The ones below are from [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") and are useful to know about, but you’ll be learning easier (and more readable) ways in the [tidyr](tidyr.html#tidyr) and [dplyr](dplyr.html#dplyr) lessons. Examples of these base R accessing functions are provided here for reference, since you might see them in other people’s scripts.
```
katara <- avatar[1, ] # first row
type <- avatar[, 2] # second column (bends)
benders <- avatar[c(1, 2), ] # selected rows (by number)
bends_name <- avatar[, c("bends", "name")] # selected columns (by name)
friendly <- avatar$friendly # by column name
```
2\.7 Troubleshooting
--------------------
What if you import some data and it guesses the wrong column type? The most common reason is that a numeric column has some non\-numbers in it somewhere. Maybe someone wrote a note in an otherwise numeric column. Columns have to be all one data type, so if there are any characters, the whole column is converted to character strings, and numbers like `1.2` get represented as `"1.2"`, which will cause very weird results like `"100" < "9"` being `TRUE`. You can catch this by looking at the output from `read_csv()` or using `glimpse()` to check your data.
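Here is a two\-line sketch of why that comparison goes wrong: character values are compared alphabetically, one character at a time, so `"1"` sorts before `"9"`.

```
"100" < "9"                          # TRUE: alphabetical comparison
as.numeric("100") < as.numeric("9")  # FALSE: numeric comparison
```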
The data directory you created with `dataskills::getdata()` contains a file called “mess.csv.” Let’s try loading this dataset.
```
mess <- read_csv("data/mess.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## `This is my messy dataset` = col_character()
## )
```
```
## Warning: 27 parsing failures.
## row col expected actual file
## 1 -- 1 columns 7 columns 'data/mess.csv'
## 2 -- 1 columns 7 columns 'data/mess.csv'
## 3 -- 1 columns 7 columns 'data/mess.csv'
## 4 -- 1 columns 7 columns 'data/mess.csv'
## 5 -- 1 columns 7 columns 'data/mess.csv'
## ... ... ......... ......... ...............
## See problems(...) for more details.
```
You’ll get a warning with many parsing errors and `mess` is just a single column of the word “junk.” View the file `data/mess.csv` by clicking on it in the File pane, and choosing “View File.” Here are the first 10 lines. What went wrong?
```
This is my messy dataset
junk,order,score,letter,good,min_max,date
junk,1,-1,a,1,1 - 2,2020-01-1
junk,missing,0.72,b,1,2 - 3,2020-01-2
junk,3,-0.62,c,FALSE,3 - 4,2020-01-3
junk,4,2.03,d,T,4 - 5,2020-01-4
```
First, the file starts with a note: “This is my messy dataset.” We want to skip the first two lines. You can do this with the argument `skip` in `read_csv()`.
```
mess <- read_csv("data/mess.csv", skip = 2)
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## junk = col_character(),
## order = col_character(),
## score = col_double(),
## letter = col_character(),
## good = col_character(),
## min_max = col_character(),
## date = col_character()
## )
```
```
mess
```
| junk | order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- | --- |
| junk | 1 | \-1\.00 | a | 1 | 1 \- 2 | 2020\-01\-1 |
| junk | missing | 0\.72 | b | 1 | 2 \- 3 | 2020\-01\-2 |
| junk | 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-3 |
| junk | 4 | 2\.03 | d | T | 4 \- 5 | 2020\-01\-4 |
| junk | 5 | NA | e | 1 | 5 \- 6 | 2020\-01\-5 |
| junk | 6 | 0\.99 | f | 0 | 6 \- 7 | 2020\-01\-6 |
| junk | 7 | 0\.03 | g | T | 7 \- 8 | 2020\-01\-7 |
| junk | 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-8 |
| junk | 9 | 0\.57 | i | 1 | 9 \- 10 | 2020\-01\-9 |
| junk | 10 | 0\.90 | j | T | 10 \- 11 | 2020\-01\-10 |
| junk | 11 | \-1\.55 | k | F | 11 \- 12 | 2020\-01\-11 |
| junk | 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| junk | 13 | 0\.15 | m | T | 13 \- 14 | 2020\-01\-13 |
| junk | 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| junk | 15 | \-0\.99 | o | 1 | 15 \- 16 | 2020\-01\-15 |
| junk | 16 | 1\.97 | p | T | 16 \- 17 | 2020\-01\-16 |
| junk | 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| junk | 18 | \-0\.90 | r | F | 18 \- 19 | 2020\-01\-18 |
| junk | 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| junk | 20 | \-0\.83 | t | 0 | 20 \- 21 | 2020\-01\-20 |
| junk | 21 | 1\.99 | u | T | 21 \- 22 | 2020\-01\-21 |
| junk | 22 | 0\.04 | v | F | 22 \- 23 | 2020\-01\-22 |
| junk | 23 | \-0\.40 | w | F | 23 \- 24 | 2020\-01\-23 |
| junk | 24 | \-0\.47 | x | 0 | 24 \- 25 | 2020\-01\-24 |
| junk | 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| junk | 26 | 0\.68 | z | 0 | 26 \- 27 | 2020\-01\-26 |
OK, that’s a little better, but this table is still a serious mess in several ways:
* `junk` is a column that we don’t need
* `order` should be an integer column
* `good` should be a logical column
* `good` uses all kinds of different ways to record TRUE and FALSE values
* `min_max` contains two pieces of numeric information, but is a character column
* `date` should be a date column
We’ll learn how to deal with this mess in the chapters on [tidy data](tidyr.html#tidyr) and [data wrangling](dplyr.html#dplyr), but we can fix a few things by setting the `col_types` argument in `read_csv()` to specify the column types for our two columns that were guessed wrong and to skip the “junk” column. The argument `col_types` takes a list where the name of each item in the list is a column name and the value is from the table below. You can use the function, like `col_double()`, or the abbreviation, like `"l"`. Omitted column names are guessed.
| function | abbreviation | data type |
| --- | --- | --- |
| col\_logical() | l | logical values |
| col\_integer() | i | integer values |
| col\_double() | d | numeric values |
| col\_character() | c | strings |
| col\_factor(levels, ordered) | f | a fixed set of values |
| col\_date(format \= "") | D | with the locale’s date\_format |
| col\_time(format \= "") | t | with the locale’s time\_format |
| col\_datetime(format \= "") | T | ISO8601 date time |
| col\_number() | n | numbers containing the grouping\_mark |
| col\_skip() | \_, \- | don’t import this column |
| col\_guess() | ? | parse using the “best” type based on the input |
```
# omitted values are guessed
# ?col_date for format options
ct <- list(
junk = "-", # skip this column
order = "i",
good = "l",
date = col_date(format = "%Y-%m-%d")
)
tidier <- read_csv("data/mess.csv",
skip = 2,
col_types = ct)
```
```
## Warning: 1 parsing failure.
## row col expected actual file
## 2 order an integer missing 'data/mess.csv'
```
You will get a message about “1 parsing failure” when you run this. Warnings look scary at first, but always start by reading the message. The table tells you what row (`2`) and column (`order`) the error was found in, what kind of data was expected (`integer`), and what the actual value was (`missing`). If you specifically tell `read_csv()` to import a column as an integer, any characters in the column will produce a warning like this and then be recorded as `NA`. You can manually set what the missing values are recorded as with the `na` argument.
```
tidiest <- read_csv("data/mess.csv",
skip = 2,
na = "missing",
col_types = ct)
```
Now `order` is an integer column where “missing” has been converted to `NA`, `good` is a logical column where `0` and `F` are converted to `FALSE` and `1` and `T` are converted to `TRUE`, and `date` is a date column (with leading zeros added to the day). We’ll learn in later chapters how to fix the other problems.
```
tidiest
```
| order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- |
| 1 | \-1 | a | TRUE | 1 \- 2 | 2020\-01\-01 |
| NA | 0\.72 | b | TRUE | 2 \- 3 | 2020\-01\-02 |
| 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-03 |
| 4 | 2\.03 | d | TRUE | 4 \- 5 | 2020\-01\-04 |
| 5 | NA | e | TRUE | 5 \- 6 | 2020\-01\-05 |
| 6 | 0\.99 | f | FALSE | 6 \- 7 | 2020\-01\-06 |
| 7 | 0\.03 | g | TRUE | 7 \- 8 | 2020\-01\-07 |
| 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-08 |
| 9 | 0\.57 | i | TRUE | 9 \- 10 | 2020\-01\-09 |
| 10 | 0\.9 | j | TRUE | 10 \- 11 | 2020\-01\-10 |
| 11 | \-1\.55 | k | FALSE | 11 \- 12 | 2020\-01\-11 |
| 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| 13 | 0\.15 | m | TRUE | 13 \- 14 | 2020\-01\-13 |
| 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| 15 | \-0\.99 | o | TRUE | 15 \- 16 | 2020\-01\-15 |
| 16 | 1\.97 | p | TRUE | 16 \- 17 | 2020\-01\-16 |
| 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| 18 | \-0\.9 | r | FALSE | 18 \- 19 | 2020\-01\-18 |
| 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| 20 | \-0\.83 | t | FALSE | 20 \- 21 | 2020\-01\-20 |
| 21 | 1\.99 | u | TRUE | 21 \- 22 | 2020\-01\-21 |
| 22 | 0\.04 | v | FALSE | 22 \- 23 | 2020\-01\-22 |
| 23 | \-0\.4 | w | FALSE | 23 \- 24 | 2020\-01\-23 |
| 24 | \-0\.47 | x | FALSE | 24 \- 25 | 2020\-01\-24 |
| 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| 26 | 0\.68 | z | FALSE | 26 \- 27 | 2020\-01\-26 |
2\.8 Glossary
-------------
| term | definition |
| --- | --- |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [character](https://psyteachr.github.io/glossary/c#character) | A data type representing strings of text. |
| [csv](https://psyteachr.github.io/glossary/c#csv) | Comma\-separated variable: a file type for representing data where each variable is separated from the next by a comma. |
| [data type](https://psyteachr.github.io/glossary/d#data.type) | The kind of data represented by an object. |
| [deviation score](https://psyteachr.github.io/glossary/d#deviation.score) | A score minus the mean |
| [double](https://psyteachr.github.io/glossary/d#double) | A data type representing a real decimal number |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [extension](https://psyteachr.github.io/glossary/e#extension) | The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd). |
| [extract operator](https://psyteachr.github.io/glossary/e#extract.operator) | A symbol used to get values from a container object, such as \[, \[\[, or $ |
| [factor](https://psyteachr.github.io/glossary/f#factor) | A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [integer](https://psyteachr.github.io/glossary/i#integer) | A data type representing whole numbers. |
| [list](https://psyteachr.github.io/glossary/l#list) | A container data type that allows items with different data types to be grouped together. |
| [logical](https://psyteachr.github.io/glossary/l#logical) | A data type representing TRUE or FALSE values. |
| [numeric](https://psyteachr.github.io/glossary/n#numeric) | A data type representing a real decimal number or integer. |
| [operator](https://psyteachr.github.io/glossary/o#operator) | A symbol that performs a mathematical operation, such as \+, \-, \*, / |
| [tabular data](https://psyteachr.github.io/glossary/t#tabular.data) | Data in a rectangular table format, where each row has an entry for each column. |
| [tidy data](https://psyteachr.github.io/glossary/t#tidy.data) | A format for data that maps the meaning onto the structure. |
| [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse) | A set of R packages that help you create and work with tidy data |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [vectorized](https://psyteachr.github.io/glossary/v#vectorized) | An operator or function that acts on each element in a vector |
2\.9 Exercises
--------------
Download the [exercises](exercises/02_data_exercise.Rmd). See the [answers](exercises/02_data_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(2)
# run this to access the answers
dataskills::exercise(2, answers = TRUE)
```
Probably the most common file type you will encounter is [.csv](https://psyteachr.github.io/glossary/c#csv "Comma-separated variable: a file type for representing data where each variable is separated from the next by a comma.") (comma\-separated values). As the name suggests, a CSV file distinguishes which values go with which variable by separating them with commas, and text values are sometimes enclosed in double quotes. The first line of a file usually provides the names of the variables.
For example, here are the first few lines of a CSV containing personality scores:
```
subj_id,O,C,E,A,N
S01,4.428571429,4.5,3.333333333,5.142857143,1.625
S02,5.714285714,2.9,3.222222222,3,2.625
S03,5.142857143,2.8,6,3.571428571,2.5
S04,3.142857143,5.2,1.333333333,1.571428571,3.125
S05,5.428571429,4.4,2.444444444,4.714285714,1.625
```
There are six variables in this dataset, and their names are given in the first line of the file: `subj_id`, `O`, `C`, `E`, `A`, and `N`. You can see that the values for each of these variables are given in order, separated by commas, on each subsequent line of the file.
When you read in CSV files, it is best practice to use the `readr::read_csv()` function. The `readr` package is automatically loaded as part of the `tidyverse` package, which we will be using in almost every script. Note that you would normally want to store the result of the `read_csv()` function to an object, as so:
```r
csv_data <- read_csv("data/5factor.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## subj_id = col_character(),
## O = col_double(),
## C = col_double(),
## E = col_double(),
## A = col_double(),
## N = col_double()
## )
```
The `read_csv()` and `read_tsv()` functions will give you some information about the data you just read in so you can check the column names and [data types](data.html#data_types). For now, it’s enough to know that `col_double()` refers to columns with numbers and `col_character()` refers to columns with words. We’ll learn in the [toroubleshooting](data.html#troubleshooting) section below how to fix it if the function guesses the wrong data type.
```
tsv_data <- read_tsv("data/5factor.txt")
xls_data <- readxl::read_xls("data/5factor.xls")
# you can load sheets from excel files by name or number
rep_data <- readxl::read_xls("data/5factor.xls", sheet = "replication")
spss_data <- rio::import("data/5factor.sav")
```
Once loaded, you can view your data using the data viewer. In the upper right hand window of RStudio, under the Environment tab, you will see the object `babynames` listed.
If you click on the View icon (, it will bring up a table view of the data you loaded in the top left pane of RStudio.
This allows you to check that the data have been loaded in properly. You can close the tab when you’re done looking at it, it won’t remove the object.
### 2\.4\.3 Creating data
If we are creating a data table from scratch, we can use the `tibble::tibble()` function, and type the data right in. The `tibble` package is part of the [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse "A set of R packages that help you create and work with tidy data") package that we loaded at the start of this chapter.
Let’s create a small table with the names of three Avatar characters and their bending type. The `tibble()` function takes arguments with the names that you want your columns to have. The values are vectors that list the column values in order.
If you don’t know the value for one of the cells, you can enter `NA`, which we have to do for Sokka because he doesn’t have any bending ability. If all the values in the column are the same, you can just enter one value and it will be copied for each row.
```
avatar <- tibble(
name = c("Katara", "Toph", "Sokka"),
bends = c("water", "earth", NA),
friendly = TRUE
)
# print it
avatar
```
| name | bends | friendly |
| --- | --- | --- |
| Katara | water | TRUE |
| Toph | earth | TRUE |
| Sokka | NA | TRUE |
### 2\.4\.4 Writing Data
If you have data that you want to save to a CSV file, use `readr::write_csv()`, as follows.
```
write_csv(avatar, "avatar.csv")
```
This will save the data in CSV format to your working directory.
* Create a new table called `family` with the first name, last name, and age of your family members.
* Save it to a CSV file called “family.csv.”
* Clear the object from your environment by restarting R or with the code `remove(family)`.
* Load the data back in and view it.
We’ll be working with [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column.") a lot in this class, but tabular data is made up of [vectors](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings."), which group together data with the same basic [data type](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object."). The following sections explain some of this terminology to help you understand the functions we’ll be learning to process and analyse data.
### 2\.4\.1 Built\-in data
R comes with built\-in datasets. Some packages, like tidyr and dataskills, also contain data. The `data()` function lists the datasets available in a package.
```
# lists datasets in dataskills
data(package = "dataskills")
```
Type the name of a dataset into the console to see the data. Type `?smalldata` into the console to see the dataset description.
```
smalldata
```
| id | group | pre | post |
| --- | --- | --- | --- |
| S01 | control | 98\.46606 | 106\.70508 |
| S02 | control | 104\.39774 | 89\.09030 |
| S03 | control | 105\.13377 | 123\.67230 |
| S04 | control | 92\.42574 | 70\.70178 |
| S05 | control | 123\.53268 | 124\.95526 |
| S06 | exp | 97\.48676 | 101\.61697 |
| S07 | exp | 87\.75594 | 126\.30077 |
| S08 | exp | 77\.15375 | 72\.31229 |
| S09 | exp | 97\.00283 | 108\.80713 |
| S10 | exp | 102\.32338 | 113\.74732 |
You can also use the `data()` function to load a dataset into your [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs").
```
# loads smalldata into the environment
data("smalldata")
```
Always, always, always, look at your data once you’ve created or loaded a table. Also look at it after each step that transforms your table. There are three main ways to look at your tibble: `print()`, `glimpse()`, and `View()`.
The `print()` method can be run explicitly, but is more commonly called by just typing the variable name on the blank line. The default is not to print the entire table, but just the first 10 rows. It’s rare to print your data in a script; that is something you usually are doing for a sanity check, and you should just do it in the console.
Let’s look at the `smalldata` table that we made above.
```
smalldata
```
| id | group | pre | post |
| --- | --- | --- | --- |
| S01 | control | 98\.46606 | 106\.70508 |
| S02 | control | 104\.39774 | 89\.09030 |
| S03 | control | 105\.13377 | 123\.67230 |
| S04 | control | 92\.42574 | 70\.70178 |
| S05 | control | 123\.53268 | 124\.95526 |
| S06 | exp | 97\.48676 | 101\.61697 |
| S07 | exp | 87\.75594 | 126\.30077 |
| S08 | exp | 77\.15375 | 72\.31229 |
| S09 | exp | 97\.00283 | 108\.80713 |
| S10 | exp | 102\.32338 | 113\.74732 |
The function `glimpse()` gives a sideways version of the tibble. This is useful if the table is very wide and you can’t see all of the columns. It also tells you the data type of each column in angled brackets after each column name. We’ll learn about [data types](data.html#data_types) below.
```
glimpse(smalldata)
```
```
## Rows: 10
## Columns: 4
## $ id <chr> "S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08", "S09", "…
## $ group <chr> "control", "control", "control", "control", "control", "exp", "e…
## $ pre <dbl> 98.46606, 104.39774, 105.13377, 92.42574, 123.53268, 97.48676, 8…
## $ post <dbl> 106.70508, 89.09030, 123.67230, 70.70178, 124.95526, 101.61697, …
```
The other way to look at the table is a more graphical spreadsheet\-like version given by `View()` (capital ‘V’). It can be useful in the console, but don’t ever put this one in a script because it will create an annoying pop\-up window when the user runs it.
Now you can click on `smalldata` in the environment pane to open it up in a viewer that looks a bit like Excel.
You can get a quick summary of a dataset with the `summary()` function.
```
summary(smalldata)
```
```
## id group pre post
## Length:10 Length:10 Min. : 77.15 Min. : 70.70
## Class :character Class :character 1st Qu.: 93.57 1st Qu.: 92.22
## Mode :character Mode :character Median : 97.98 Median :107.76
## Mean : 98.57 Mean :103.79
## 3rd Qu.:103.88 3rd Qu.:121.19
## Max. :123.53 Max. :126.30
```
You can even do things like calculate the difference between the means of two columns.
```
pre_mean <- mean(smalldata$pre)
post_mean <- mean(smalldata$post)
post_mean - pre_mean
```
```
## [1] 5.223055
```
### 2\.4\.2 Importing data
Built\-in data are nice for examples, but you’re probably more interested in your own data. There are many different types of files that you might work with when doing data analysis. These different file types are usually distinguished by the three letter [extension](https://psyteachr.github.io/glossary/e#extension "The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd).") following a period at the end of the file name. Here are some examples of different types of files and the functions you would use to read them in or write them out.
| Extension | File Type | Reading | Writing |
| --- | --- | --- | --- |
| .csv | Comma\-separated values | `readr::read_csv()` | `readr::write_csv()` |
| .tsv, .txt | Tab\-separated values | `readr::read_tsv()` | `readr::write_tsv()` |
| .xls, .xlsx | Excel workbook | `readxl::read_excel()` | NA |
| .sav, .mat, … | Multiple types | `rio::import()` | NA |
The double colon means that the function on the right comes from the package on the left, so `readr::read_csv()` refers to the `read_csv()` function in the `readr` package, and `readxl::read_excel()` refers to the function `read_excel()` in the package `readxl`. The function `rio::import()` from the `rio` package will read almost any type of data file, including SPSS and Matlab. Check the help with `?rio::import` to see a full list.
You can get a directory of data files used in this class for tutorials and exercises with the following code, which will create a directory called “data” in your project directory. Alternatively, you can download a [zip file of the datasets](data/data.zip).
```
dataskills::getdata()
```
Probably the most common file type you will encounter is [.csv](https://psyteachr.github.io/glossary/c#csv "Comma-separated variable: a file type for representing data where each variable is separated from the next by a comma.") (comma\-separated values). As the name suggests, a CSV file distinguishes which values go with which variable by separating them with commas, and text values are sometimes enclosed in double quotes. The first line of a file usually provides the names of the variables.
For example, here are the first few lines of a CSV containing personality scores:
```
subj_id,O,C,E,A,N
S01,4.428571429,4.5,3.333333333,5.142857143,1.625
S02,5.714285714,2.9,3.222222222,3,2.625
S03,5.142857143,2.8,6,3.571428571,2.5
S04,3.142857143,5.2,1.333333333,1.571428571,3.125
S05,5.428571429,4.4,2.444444444,4.714285714,1.625
```
There are six variables in this dataset, and their names are given in the first line of the file: `subj_id`, `O`, `C`, `E`, `A`, and `N`. You can see that the values for each of these variables are given in order, separated by commas, on each subsequent line of the file.
When you read in CSV files, it is best practice to use the `readr::read_csv()` function. The `readr` package is automatically loaded as part of the `tidyverse` package, which we will be using in almost every script. Note that you would normally want to store the result of the `read_csv()` function to an object, as so:
```r
csv_data <- read_csv("data/5factor.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## subj_id = col_character(),
## O = col_double(),
## C = col_double(),
## E = col_double(),
## A = col_double(),
## N = col_double()
## )
```
The `read_csv()` and `read_tsv()` functions will give you some information about the data you just read in so you can check the column names and [data types](data.html#data_types). For now, it’s enough to know that `col_double()` refers to columns with numbers and `col_character()` refers to columns with words. We’ll learn in the [toroubleshooting](data.html#troubleshooting) section below how to fix it if the function guesses the wrong data type.
```
tsv_data <- read_tsv("data/5factor.txt")
xls_data <- readxl::read_xls("data/5factor.xls")
# you can load sheets from excel files by name or number
rep_data <- readxl::read_xls("data/5factor.xls", sheet = "replication")
spss_data <- rio::import("data/5factor.sav")
```
Once loaded, you can view your data using the data viewer. In the upper right hand window of RStudio, under the Environment tab, you will see the object `babynames` listed.
If you click on the View icon (, it will bring up a table view of the data you loaded in the top left pane of RStudio.
This allows you to check that the data have been loaded in properly. You can close the tab when you’re done looking at it, it won’t remove the object.
### 2\.4\.3 Creating data
If we are creating a data table from scratch, we can use the `tibble::tibble()` function, and type the data right in. The `tibble` package is part of the [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse "A set of R packages that help you create and work with tidy data") package that we loaded at the start of this chapter.
Let’s create a small table with the names of three Avatar characters and their bending type. The `tibble()` function takes arguments with the names that you want your columns to have. The values are vectors that list the column values in order.
If you don’t know the value for one of the cells, you can enter `NA`, which we have to do for Sokka because he doesn’t have any bending ability. If all the values in the column are the same, you can just enter one value and it will be copied for each row.
```
avatar <- tibble(
name = c("Katara", "Toph", "Sokka"),
bends = c("water", "earth", NA),
friendly = TRUE
)
# print it
avatar
```
| name | bends | friendly |
| --- | --- | --- |
| Katara | water | TRUE |
| Toph | earth | TRUE |
| Sokka | NA | TRUE |
### 2\.4\.4 Writing Data
If you have data that you want to save to a CSV file, use `readr::write_csv()`, as follows.
```
write_csv(avatar, "avatar.csv")
```
This will save the data in CSV format to your working directory.
* Create a new table called `family` with the first name, last name, and age of your family members.
* Save it to a CSV file called “family.csv.”
* Clear the object from your environment by restarting R or with the code `remove(family)`.
* Load the data back in and view it.
We’ll be working with [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column.") a lot in this class, but tabular data is made up of [vectors](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings."), which group together data with the same basic [data type](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object."). The following sections explain some of this terminology to help you understand the functions we’ll be learning to process and analyse data.
2\.5 Basic data types
---------------------
Data can be numbers, words, true/false values or combinations of these. In order to understand some later concepts, it’s useful to have a basic understanding of [data types](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object.") in R: [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer."), [character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text."), and [logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values.") There is also a specific data type called a [factor](https://psyteachr.github.io/glossary/f#factor "A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter"), which will probably give you a headache sooner or later, but we can ignore it for now.
### 2\.5\.1 Numeric data
All of the real numbers are [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer.") data types (imaginary numbers are “complex”). There are two types of numeric data, [integer](https://psyteachr.github.io/glossary/i#integer "A data type representing whole numbers.") and [double](https://psyteachr.github.io/glossary/d#double "A data type representing a real decimal number"). Integers are the whole numbers, like \-1, 0 and 1\. Doubles are numbers that can have fractional amounts. If you just type a plain number such as `10`, it is stored as a double, even if it doesn’t have a decimal point. If you want it to be an exact integer, use the `L` suffix (10L).
If you ever want to know the data type of something, use the `typeof` function.
```
typeof(10) # double
typeof(10.0) # double
typeof(10L) # integer
typeof(10i) # complex
```
```
## [1] "double"
## [1] "double"
## [1] "integer"
## [1] "complex"
```
If you want to know if something is numeric (a double or an integer), you can use the function `is.numeric()` and it will tell you if it is numeric (`TRUE`) or not (`FALSE`).
```
is.numeric(10L)
is.numeric(10.0)
is.numeric("Not a number")
```
```
## [1] TRUE
## [1] TRUE
## [1] FALSE
```
### 2\.5\.2 Character data
[Character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text.") strings are any text between quotation marks.
```
typeof("This is a character string")
typeof('You can use double or single quotes')
```
```
## [1] "character"
## [1] "character"
```
This can include quotes, but you have to [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it using a backslash to signal the the quote isn’t meant to be the end of the string.
```
my_string <- "The instructor said, \"R is cool,\" and the class agreed."
cat(my_string) # cat() prints the arguments
```
```
## The instructor said, "R is cool," and the class agreed.
```
### 2\.5\.3 Logical Data
[Logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values.") data (also sometimes called “boolean” values) is one of two values: true or false. In R, we always write them in uppercase: `TRUE` and `FALSE`.
```
class(TRUE)
class(FALSE)
```
```
## [1] "logical"
## [1] "logical"
```
When you compare two values with an [operator](https://psyteachr.github.io/glossary/o#operator "A symbol that performs a mathematical operation, such as +, -, *, /"), such as checking to see if 10 is greater than 5, the resulting value is logical.
```
is.logical(10 > 5)
```
```
## [1] TRUE
```
You might also see logical values abbreviated as `T` and `F`, or `0` and `1`. This can cause some problems down the road, so we will always spell out the whole thing.
What data types are these:
* `100` integer double character logical factor
* `100L` integer double character logical factor
* `"100"` integer double character logical factor
* `100.0` integer double character logical factor
* `-100L` integer double character logical factor
* `factor(100)` integer double character logical factor
* `TRUE` integer double character logical factor
* `"TRUE"` integer double character logical factor
* `FALSE` integer double character logical factor
* `1 == 2` integer double character logical factor
### 2\.5\.1 Numeric data
All of the real numbers are [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer.") data types (imaginary numbers are “complex”). There are two types of numeric data, [integer](https://psyteachr.github.io/glossary/i#integer "A data type representing whole numbers.") and [double](https://psyteachr.github.io/glossary/d#double "A data type representing a real decimal number"). Integers are the whole numbers, like \-1, 0 and 1\. Doubles are numbers that can have fractional amounts. If you just type a plain number such as `10`, it is stored as a double, even if it doesn’t have a decimal point. If you want it to be an exact integer, use the `L` suffix (10L).
If you ever want to know the data type of something, use the `typeof` function.
```
typeof(10) # double
typeof(10.0) # double
typeof(10L) # integer
typeof(10i) # complex
```
```
## [1] "double"
## [1] "double"
## [1] "integer"
## [1] "complex"
```
If you want to know if something is numeric (a double or an integer), you can use the function `is.numeric()` and it will tell you if it is numeric (`TRUE`) or not (`FALSE`).
```
is.numeric(10L)
is.numeric(10.0)
is.numeric("Not a number")
```
```
## [1] TRUE
## [1] TRUE
## [1] FALSE
```
2\.6 Basic container types
--------------------------
Individual data values can be grouped together into containers. The main types of containers we’ll work with are vectors, lists, and data tables.
### 2\.6\.1 Vectors
A [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") in R is like a vector in mathematics: a set of ordered elements. All of the elements in a vector must be of the same **data type** (numeric, character, logical). You can create a vector by enclosing the elements in the function `c()`.
```
## put information into a vector using c(...)
c(1, 2, 3, 4)
c("this", "is", "cool")
1:6 # shortcut to make a vector of all integers x:y
```
```
## [1] 1 2 3 4
## [1] "this" "is" "cool"
## [1] 1 2 3 4 5 6
```
What happens when you mix types? What class is the variable `mixed`?
```
mixed <- c(2, "good", 2L, "b", TRUE)
```
You can’t mix data types in a vector; all elements of the vector must be the same data type. If you mix them, R will “coerce” them so that they are all the same. If you mix doubles and integers, the integers will be changed to doubles. If you mix characters and numeric types, the numbers will be coerced to characters, so `10` would turn into `"10"`.
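You can confirm the coercion yourself (a small sketch, not part of the original): every element of `mixed` ends up as a character string.
```
typeof(mixed) # "character"
mixed         # "2" "good" "2" "b" "TRUE"
```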
#### 2\.6\.1\.1 Selecting values from a vector
If we wanted to pick specific values out of a vector by position, we can use square brackets (an [extract operator](https://psyteachr.github.io/glossary/e#extract-operator "A symbol used to get values from a container object, such as [, [[, or $"), or `[]`) after the vector.
```
values <- c(10, 20, 30, 40, 50)
values[2] # selects the second value
```
```
## [1] 20
```
You can select more than one value from the vector by putting a vector of numbers inside the square brackets. For example, you can select the 18th, 19th, 20th, 21st, 4th, 9th and 15th letter from the built\-in vector `LETTERS` (which gives all the uppercase letters in the Latin alphabet).
```
word <- c(18, 19, 20, 21, 4, 9, 15)
LETTERS[word]
```
```
## [1] "R" "S" "T" "U" "D" "I" "O"
```
Can you decode the secret message?
```
secret <- c(14, 5, 22, 5, 18, 7, 15, 14, 14, 1, 7, 9, 22, 5, 25, 15, 21, 21, 16)
```
You can also create ‘named’ vectors, where each element has a name. For example:
```
vec <- c(first = 77.9, second = -13.2, third = 100.1)
vec
```
```
## first second third
## 77.9 -13.2 100.1
```
We can then access elements by name using a character vector within the square brackets. We can put them in any order we want, and we can repeat elements:
```
vec[c("third", "second", "second")]
```
```
## third second second
## 100.1 -13.2 -13.2
```
We can get the vector of names using the `names()` function, and we can set or change them using something like `names(vec2) <- c("n1", "n2", "n3")`.
Another way to access elements is by using a logical vector within the square brackets. This will pull out the elements of the vector for which the corresponding element of the logical vector is `TRUE`. If the logical vector is shorter than the original, it will be recycled (repeated) until it matches the length. You can find out how long a vector is using the `length()` function.
```
length(LETTERS)
LETTERS[c(TRUE, FALSE)]
```
```
## [1] 26
## [1] "A" "C" "E" "G" "I" "K" "M" "O" "Q" "S" "U" "W" "Y"
```
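A common use of logical indexing (an illustrative sketch, not from the original text) is to filter a vector with a comparison, because the comparison itself returns a logical vector of the same length:
```
values <- c(10, 20, 30, 40, 50)
values > 25         # FALSE FALSE TRUE TRUE TRUE
values[values > 25] # keeps only 30, 40, 50
```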
#### 2\.6\.1\.2 Repeating Sequences
Here are some useful tricks to save typing when creating vectors.
In the command `x:y`, the `:` operator gives you the sequence of numbers starting at `x` and going to `y` in increments of 1\.
```
1:10
15.3:20.5
0:-10
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
## [1] 15.3 16.3 17.3 18.3 19.3 20.3
## [1] 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10
```
What if you want to create a sequence but with something other than integer steps? You can use the `seq()` function. Look at the examples below and work out what the arguments do.
```
seq(from = -1, to = 1, by = 0.2)
seq(0, 100, length.out = 11)
seq(0, 10, along.with = LETTERS)
```
```
## [1] -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0
## [1] 0 10 20 30 40 50 60 70 80 90 100
## [1] 0.0 0.4 0.8 1.2 1.6 2.0 2.4 2.8 3.2 3.6 4.0 4.4 4.8 5.2 5.6
## [16] 6.0 6.4 6.8 7.2 7.6 8.0 8.4 8.8 9.2 9.6 10.0
```
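Two related helpers you may run into (an aside, not in the original text): `seq_len(n)` and `seq_along(x)` behave like `1:n` and `1:length(x)`, but return an empty sequence when the length is zero instead of counting backwards.
```
seq_len(3)                  # 1 2 3
seq_len(0)                  # integer(0), whereas 1:0 gives 1 0
seq_along(c("a", "b", "c")) # 1 2 3
```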
What if you want to repeat a vector many times? You could either type it out (painful) or use the `rep()` function, which can repeat vectors in different ways.
```
rep(0, 10) # ten zeroes
rep(c(1L, 3L), times = 7) # alternating 1 and 3, 7 times
rep(c("A", "B", "C"), each = 2) # A to C, 2 times each
```
```
## [1] 0 0 0 0 0 0 0 0 0 0
## [1] 1 3 1 3 1 3 1 3 1 3 1 3 1 3
## [1] "A" "A" "B" "B" "C" "C"
```
The `rep()` function is useful to create a vector of logical values (`TRUE`/`FALSE` or `1`/`0`) to select values from another vector.
```
# Get subject IDs in the pattern Y Y N N ...
subject_ids <- 1:40
yynn <- rep(c(TRUE, FALSE), each = 2,
length.out = length(subject_ids))
subject_ids[yynn]
```
```
## [1] 1 2 5 6 9 10 13 14 17 18 21 22 25 26 29 30 33 34 37 38
```
#### 2\.6\.1\.3 Vectorized Operations
R performs calculations on vectors in a special way. Let’s look at an example using \\(z\\)\-scores. A \\(z\\)\-score is a [deviation score](https://psyteachr.github.io/glossary/d#deviation-score "A score minus the mean") (a score minus a mean) divided by a standard deviation. Let’s say we have a set of four IQ scores.
```
## example IQ scores: mu = 100, sigma = 15
iq <- c(86, 101, 127, 99)
```
If we want to subtract the mean from these four scores, we just use the following code:
```
iq - 100
```
```
## [1] -14 1 27 -1
```
This subtracts 100 from each element of the vector. R automatically assumes that this is what you wanted to do; it is called a [vectorized](https://psyteachr.github.io/glossary/v#vectorized "An operator or function that acts on each element in a vector") operation and it makes it possible to express operations more efficiently.
To calculate \\(z\\)\-scores we use the formula:
\\(z \= \\frac{X \- \\mu}{\\sigma}\\)
where \\(X\\) are the scores, \\(\\mu\\) is the mean, and \\(\\sigma\\) is the standard deviation. We can express this formula in R as follows:
```
## z-scores
(iq - 100) / 15
```
```
## [1] -0.93333333 0.06666667 1.80000000 -0.06666667
```
You can see that it computed all four \\(z\\)\-scores with a single line of code. In later chapters, we’ll use vectorised operations to process our data, such as reverse\-scoring some questionnaire items.
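If you don’t know the population parameters, the same vectorised idea works with the sample statistics (a sketch, assuming you want sample \\(z\\)\-scores rather than the population\-based ones above):
```
## sample z-scores using the observed mean and standard deviation
(iq - mean(iq)) / sd(iq)
```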
### 2\.6\.2 Lists
Recall that vectors can contain data of only one type. What if you want to store a collection of data of different data types? For that purpose you would use a [list](https://psyteachr.github.io/glossary/l#list "A container data type that allows items with different data types to be grouped together."). Define a list using the `list()` function.
```
data_types <- list(
double = 10.0,
integer = 10L,
character = "10",
logical = TRUE
)
str(data_types) # str() prints lists in a condensed format
```
```
## List of 4
## $ double : num 10
## $ integer : int 10
## $ character: chr "10"
## $ logical : logi TRUE
```
You can refer to elements of a list using square brackets like a vector, but you can also use the dollar sign notation (`$`) if the list items have names.
```
data_types$logical
```
```
## [1] TRUE
```
Explore the 5 ways shown below to extract a value from a list. What data type is each object? What is the difference between the single and double brackets? Which one is the same as the dollar sign?
```
bracket1 <- data_types[1]
bracket2 <- data_types[[1]]
name1 <- data_types["double"]
name2 <- data_types[["double"]]
dollar <- data_types$double
```
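One way to explore the difference (a hedged sketch; the original leaves this as an exercise) is to inspect each object with `str()` and compare them with `identical()`:
```
str(bracket1)               # a list of length 1: single brackets return a sub-list
str(bracket2)               # num 10: double brackets return the element itself
identical(bracket2, dollar) # TRUE: [[ ]] and $ give the same thing
identical(bracket1, name1)  # TRUE: selecting by position or by name is equivalent here
```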
### 2\.6\.3 Tables
The built\-in, imported, and created data above are [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column."), data arranged in the form of a table.
Tabular data structures allow for a collection of data of different types (characters, integers, logical, etc.) but subject to the constraint that each “column” of the table (element of the list) must have the same number of elements. The base R version of a table is called a `data.frame`, while the ‘tidyverse’ version is called a `tibble`. Tibbles are far easier to work with, so we’ll be using those. To learn more about differences between these two data structures, see `vignette("tibble")`.
Tabular data becomes especially important for when we talk about [tidy data](https://psyteachr.github.io/glossary/t#tidy-data "A format for data that maps the meaning onto the structure.") in [chapter 4](tidyr.html#tidyr), which consists of a set of simple principles for structuring data.
#### 2\.6\.3\.1 Creating a table
We learned how to create a table by importing an Excel or CSV file, and by creating a table from scratch using the `tibble()` function. You can also use the `tibble::tribble()` function to create a table by row, rather than by column. You start by listing the column names, each preceded by a tilde (`~`), then you list the values for each column, row by row, separated by commas (don’t forget a comma at the end of each row). This method can be easier for some data, but doesn’t let you use shortcuts, like setting all of the values in a column to the same value or a [repeating sequence](data.html#rep_seq).
```
# by column using tibble
avatar_by_col <- tibble(
name = c("Katara", "Toph", "Sokka", "Azula"),
bends = c("water", "earth", NA, "fire"),
friendly = rep(c(TRUE, FALSE), c(3, 1))
)
# by row using tribble
avatar_by_row <- tribble(
~name, ~bends, ~friendly,
"Katara", "water", TRUE,
"Toph", "earth", TRUE,
"Sokka", NA, TRUE,
"Azula", "fire", FALSE
)
```
#### 2\.6\.3\.2 Table info
We can get information about the table using the functions `ncol()` (number of columns), `nrow()` (number of rows), `dim()` (the number of rows and number of columns), and `names()` (the column names).
```
nrow(avatar) # how many rows?
ncol(avatar) # how many columns?
dim(avatar) # what are the table dimensions?
names(avatar) # what are the column names?
```
```
## [1] 3
## [1] 3
## [1] 3 3
## [1] "name" "bends" "friendly"
```
#### 2\.6\.3\.3 Accessing rows and columns
There are various ways of accessing specific columns or rows from a table. The ones below are from [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") and are useful to know about, but you’ll be learning easier (and more readable) ways in the [tidyr](tidyr.html#tidyr) and [dplyr](dplyr.html#dplyr) lessons. Examples of these base R accessing functions are provided here for reference, since you might see them in other people’s scripts.
```
katara <- avatar[1, ] # first row
type <- avatar[, 2] # second column (bends)
benders <- avatar[c(1, 2), ] # selected rows (by number)
bends_name <- avatar[, c("bends", "name")] # selected columns (by name)
friendly <- avatar$friendly # by column name
```
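A subtlety worth knowing about these accessors on a tibble (an aside, stated as a sketch): single\-bracket column selection returns another table, while `$` or `[[ ]]` returns a plain vector.
```
avatar[, "friendly"]  # a 3 x 1 tibble
avatar$friendly       # a logical vector: TRUE TRUE TRUE
avatar[["friendly"]]  # also a logical vector
```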
2\.7 Troubleshooting
--------------------
What if you import some data and it guesses the wrong column type? The most common reason is that a numeric column has some non\-numbers in it somewhere. Maybe someone wrote a note in an otherwise numeric column. Columns have to be all one data type, so if there are any characters, the whole column is converted to character strings, and numbers like `1.2` get represented as `"1.2"`, which will cause very weird errors like `"100" < "9" == TRUE`. You can catch this by looking at the output from `read_csv()` or using `glimpse()` to check your data.
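To see why that comparison is so counter\-intuitive (a quick illustration, not from the original), remember that strings compare alphabetically, character by character:
```
"100" < "9" # TRUE: "1" sorts before "9", so "100" comes first alphabetically
100 < 9     # FALSE: numeric comparison behaves as expected
```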
The data directory you created with `dataskills::getdata()` contains a file called “mess.csv.” Let’s try loading this dataset.
```
mess <- read_csv("data/mess.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## `This is my messy dataset` = col_character()
## )
```
```
## Warning: 27 parsing failures.
## row col expected actual file
## 1 -- 1 columns 7 columns 'data/mess.csv'
## 2 -- 1 columns 7 columns 'data/mess.csv'
## 3 -- 1 columns 7 columns 'data/mess.csv'
## 4 -- 1 columns 7 columns 'data/mess.csv'
## 5 -- 1 columns 7 columns 'data/mess.csv'
## ... ... ......... ......... ...............
## See problems(...) for more details.
```
You’ll get a warning with many parsing errors and `mess` is just a single column of the word “junk.” View the file `data/mess.csv` by clicking on it in the File pane, and choosing “View File.” Here are the first few lines. What went wrong?
```
This is my messy dataset
junk,order,score,letter,good,min_max,date
junk,1,-1,a,1,1 - 2,2020-01-1
junk,missing,0.72,b,1,2 - 3,2020-01-2
junk,3,-0.62,c,FALSE,3 - 4,2020-01-3
junk,4,2.03,d,T,4 - 5,2020-01-4
```
First, the file starts with a note: “This is my messy dataset.” We want to skip the first two lines. You can do this with the argument `skip` in `read_csv()`.
```
mess <- read_csv("data/mess.csv", skip = 2)
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## junk = col_character(),
## order = col_character(),
## score = col_double(),
## letter = col_character(),
## good = col_character(),
## min_max = col_character(),
## date = col_character()
## )
```
```
mess
```
| junk | order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- | --- |
| junk | 1 | \-1\.00 | a | 1 | 1 \- 2 | 2020\-01\-1 |
| junk | missing | 0\.72 | b | 1 | 2 \- 3 | 2020\-01\-2 |
| junk | 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-3 |
| junk | 4 | 2\.03 | d | T | 4 \- 5 | 2020\-01\-4 |
| junk | 5 | NA | e | 1 | 5 \- 6 | 2020\-01\-5 |
| junk | 6 | 0\.99 | f | 0 | 6 \- 7 | 2020\-01\-6 |
| junk | 7 | 0\.03 | g | T | 7 \- 8 | 2020\-01\-7 |
| junk | 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-8 |
| junk | 9 | 0\.57 | i | 1 | 9 \- 10 | 2020\-01\-9 |
| junk | 10 | 0\.90 | j | T | 10 \- 11 | 2020\-01\-10 |
| junk | 11 | \-1\.55 | k | F | 11 \- 12 | 2020\-01\-11 |
| junk | 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| junk | 13 | 0\.15 | m | T | 13 \- 14 | 2020\-01\-13 |
| junk | 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| junk | 15 | \-0\.99 | o | 1 | 15 \- 16 | 2020\-01\-15 |
| junk | 16 | 1\.97 | p | T | 16 \- 17 | 2020\-01\-16 |
| junk | 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| junk | 18 | \-0\.90 | r | F | 18 \- 19 | 2020\-01\-18 |
| junk | 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| junk | 20 | \-0\.83 | t | 0 | 20 \- 21 | 2020\-01\-20 |
| junk | 21 | 1\.99 | u | T | 21 \- 22 | 2020\-01\-21 |
| junk | 22 | 0\.04 | v | F | 22 \- 23 | 2020\-01\-22 |
| junk | 23 | \-0\.40 | w | F | 23 \- 24 | 2020\-01\-23 |
| junk | 24 | \-0\.47 | x | 0 | 24 \- 25 | 2020\-01\-24 |
| junk | 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| junk | 26 | 0\.68 | z | 0 | 26 \- 27 | 2020\-01\-26 |
OK, that’s a little better, but this table is still a serious mess in several ways:
* `junk` is a column that we don’t need
* `order` should be an integer column
* `good` should be a logical column
* `good` uses all kinds of different ways to record TRUE and FALSE values
* `min_max` contains two pieces of numeric information, but is a character column
* `date` should be a date column
We’ll learn how to deal with this mess in the chapters on [tidy data](tidyr.html#tidyr) and [data wrangling](dplyr.html#dplyr), but we can fix a few things by setting the `col_types` argument in `read_csv()` to specify the column types for our two columns that were guessed wrong and skip the “junk” column. The argument `col_types` takes a list where the name of each item in the list is a column name and the value is from the table below. You can use the function, like `col_double()` or the abbreviation, like `"l"`. Omitted column names are guessed.
| function | abbreviation | description |
| --- | --- | --- |
| col\_logical() | l | logical values |
| col\_integer() | i | integer values |
| col\_double() | d | numeric values |
| col\_character() | c | strings |
| col\_factor(levels, ordered) | f | a fixed set of values |
| col\_date(format \= "") | D | with the locale’s date\_format |
| col\_time(format \= "") | t | with the locale’s time\_format |
| col\_datetime(format \= "") | T | ISO8601 date time |
| col\_number() | n | numbers containing the grouping\_mark |
| col\_skip() | \_, \- | don’t import this column |
| col\_guess() | ? | parse using the “best” type based on the input |
```
# omitted values are guessed
# ?col_date for format options
ct <- list(
junk = "-", # skip this column
order = "i",
good = "l",
date = col_date(format = "%Y-%m-%d")
)
tidier <- read_csv("data/mess.csv",
skip = 2,
col_types = ct)
```
```
## Warning: 1 parsing failure.
## row col expected actual file
## 2 order an integer missing 'data/mess.csv'
```
You will get a message about “1 parsing failure” when you run this. Warnings look scary at first, but always start by reading the message. The table tells you what row (`2`) and column (`order`) the error was found in, what kind of data was expected (`integer`), and what the actual value was (`missing`). If you specifically tell `read_csv()` to import a column as an integer, any characters in the column will produce a warning like this and then be recorded as `NA`. You can manually set what the missing values are recorded as with the `na` argument.
```
tidiest <- read_csv("data/mess.csv",
skip = 2,
na = "missing",
col_types = ct)
```
Now `order` is an integer column in which “missing” has been converted to `NA`, `good` is a logical column in which `0` and `F` are converted to `FALSE` and `1` and `T` to `TRUE`, and `date` is a date column (with leading zeros added to the day). We’ll learn in later chapters how to fix the other problems.
```
tidiest
```
| order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- |
| 1 | \-1 | a | TRUE | 1 \- 2 | 2020\-01\-01 |
| NA | 0\.72 | b | TRUE | 2 \- 3 | 2020\-01\-02 |
| 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-03 |
| 4 | 2\.03 | d | TRUE | 4 \- 5 | 2020\-01\-04 |
| 5 | NA | e | TRUE | 5 \- 6 | 2020\-01\-05 |
| 6 | 0\.99 | f | FALSE | 6 \- 7 | 2020\-01\-06 |
| 7 | 0\.03 | g | TRUE | 7 \- 8 | 2020\-01\-07 |
| 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-08 |
| 9 | 0\.57 | i | TRUE | 9 \- 10 | 2020\-01\-09 |
| 10 | 0\.9 | j | TRUE | 10 \- 11 | 2020\-01\-10 |
| 11 | \-1\.55 | k | FALSE | 11 \- 12 | 2020\-01\-11 |
| 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| 13 | 0\.15 | m | TRUE | 13 \- 14 | 2020\-01\-13 |
| 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| 15 | \-0\.99 | o | TRUE | 15 \- 16 | 2020\-01\-15 |
| 16 | 1\.97 | p | TRUE | 16 \- 17 | 2020\-01\-16 |
| 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| 18 | \-0\.9 | r | FALSE | 18 \- 19 | 2020\-01\-18 |
| 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| 20 | \-0\.83 | t | FALSE | 20 \- 21 | 2020\-01\-20 |
| 21 | 1\.99 | u | TRUE | 21 \- 22 | 2020\-01\-21 |
| 22 | 0\.04 | v | FALSE | 22 \- 23 | 2020\-01\-22 |
| 23 | \-0\.4 | w | FALSE | 23 \- 24 | 2020\-01\-23 |
| 24 | \-0\.47 | x | FALSE | 24 \- 25 | 2020\-01\-24 |
| 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| 26 | 0\.68 | z | FALSE | 26 \- 27 | 2020\-01\-26 |
2\.8 Glossary
-------------
| term | definition |
| --- | --- |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [character](https://psyteachr.github.io/glossary/c#character) | A data type representing strings of text. |
| [csv](https://psyteachr.github.io/glossary/c#csv) | Comma\-separated variable: a file type for representing data where each variable is separated from the next by a comma. |
| [data type](https://psyteachr.github.io/glossary/d#data.type) | The kind of data represented by an object. |
| [deviation score](https://psyteachr.github.io/glossary/d#deviation.score) | A score minus the mean |
| [double](https://psyteachr.github.io/glossary/d#double) | A data type representing a real decimal number |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [extension](https://psyteachr.github.io/glossary/e#extension) | The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd). |
| [extract operator](https://psyteachr.github.io/glossary/e#extract.operator) | A symbol used to get values from a container object, such as \[, \[\[, or $ |
| [factor](https://psyteachr.github.io/glossary/f#factor) | A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [integer](https://psyteachr.github.io/glossary/i#integer) | A data type representing whole numbers. |
| [list](https://psyteachr.github.io/glossary/l#list) | A container data type that allows items with different data types to be grouped together. |
| [logical](https://psyteachr.github.io/glossary/l#logical) | A data type representing TRUE or FALSE values. |
| [numeric](https://psyteachr.github.io/glossary/n#numeric) | A data type representing a real decimal number or integer. |
| [operator](https://psyteachr.github.io/glossary/o#operator) | A symbol that performs a mathematical operation, such as \+, \-, \*, / |
| [tabular data](https://psyteachr.github.io/glossary/t#tabular.data) | Data in a rectangular table format, where each row has an entry for each column. |
| [tidy data](https://psyteachr.github.io/glossary/t#tidy.data) | A format for data that maps the meaning onto the structure. |
| [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse) | A set of R packages that help you create and work with tidy data |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [vectorized](https://psyteachr.github.io/glossary/v#vectorized) | An operator or function that acts on each element in a vector |
2\.9 Exercises
--------------
Download the [exercises](exercises/02_data_exercise.Rmd). See the [answers](exercises/02_data_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(2)
# run this to access the answers
dataskills::exercise(2, answers = TRUE)
```
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/data.html |
Chapter 2 Working with Data
===========================
2\.1 Learning Objectives
------------------------
1. Load [built\-in datasets](data.html#builtin) [(video)](https://youtu.be/Z5fK5VGmzlY)
2. [Import data](data.html#import_data) from CSV and Excel files [(video)](https://youtu.be/a7Ra-hnB8l8)
3. Create a [data table](data.html#tables) [(video)](https://youtu.be/k-aqhurepb4)
4. Understand and use the [basic data types](data.html#data_types) [(video)](https://youtu.be/jXQrF18Jaac)
5. Understand and use the [basic container types](data.html#containers) (list, vector) [(video)](https://youtu.be/4xU7uKNdoig)
6. Use [vectorized operations](data.html#vectorized_ops) [(video)](https://youtu.be/9I5MdS7UWmI)
7. Be able to [troubleshoot](#Troubleshooting) common data import problems [(video)](https://youtu.be/gcxn4LJ_vAI)
2\.2 Resources
--------------
* [Chapter 11: Data Import](http://r4ds.had.co.nz/data-import.html) in *R for Data Science*
* [RStudio Data Import Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/data-import.pdf)
* [Scottish Babynames](https://www.nrscotland.gov.uk/files//statistics/babies-first-names-full-list/summary-records/babies-names16-all-names-years.csv)
* [Developing an analysis in R/RStudio: Scottish babynames (1/2\)](https://www.youtube.com/watch?v=lAaVPMcMs1w)
* [Developing an analysis in R/RStudio: Scottish babynames (2/2\)](https://www.youtube.com/watch?v=lzdTHCcClqo)
2\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(dataskills)
```
2\.4 Data tables
----------------
### 2\.4\.1 Built\-in data
R comes with built\-in datasets. Some packages, like tidyr and dataskills, also contain data. The `data()` function lists the datasets available in a package.
```
# lists datasets in dataskills
data(package = "dataskills")
```
Type the name of a dataset into the console to see the data. Type `?smalldata` into the console to see the dataset description.
```
smalldata
```
| id | group | pre | post |
| --- | --- | --- | --- |
| S01 | control | 98\.46606 | 106\.70508 |
| S02 | control | 104\.39774 | 89\.09030 |
| S03 | control | 105\.13377 | 123\.67230 |
| S04 | control | 92\.42574 | 70\.70178 |
| S05 | control | 123\.53268 | 124\.95526 |
| S06 | exp | 97\.48676 | 101\.61697 |
| S07 | exp | 87\.75594 | 126\.30077 |
| S08 | exp | 77\.15375 | 72\.31229 |
| S09 | exp | 97\.00283 | 108\.80713 |
| S10 | exp | 102\.32338 | 113\.74732 |
You can also use the `data()` function to load a dataset into your [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs").
```
# loads smalldata into the environment
data("smalldata")
```
Always, always, always, look at your data once you’ve created or loaded a table. Also look at it after each step that transforms your table. There are three main ways to look at your tibble: `print()`, `glimpse()`, and `View()`.
The `print()` method can be run explicitly, but is more commonly called by just typing the variable name on the blank line. The default is not to print the entire table, but just the first 10 rows. It’s rare to print your data in a script; that is something you usually are doing for a sanity check, and you should just do it in the console.
Let’s look at the `smalldata` table that we made above.
```
smalldata
```
| id | group | pre | post |
| --- | --- | --- | --- |
| S01 | control | 98\.46606 | 106\.70508 |
| S02 | control | 104\.39774 | 89\.09030 |
| S03 | control | 105\.13377 | 123\.67230 |
| S04 | control | 92\.42574 | 70\.70178 |
| S05 | control | 123\.53268 | 124\.95526 |
| S06 | exp | 97\.48676 | 101\.61697 |
| S07 | exp | 87\.75594 | 126\.30077 |
| S08 | exp | 77\.15375 | 72\.31229 |
| S09 | exp | 97\.00283 | 108\.80713 |
| S10 | exp | 102\.32338 | 113\.74732 |
The function `glimpse()` gives a sideways version of the tibble. This is useful if the table is very wide and you can’t see all of the columns. It also tells you the data type of each column in angled brackets after each column name. We’ll learn about [data types](data.html#data_types) below.
```
glimpse(smalldata)
```
```
## Rows: 10
## Columns: 4
## $ id <chr> "S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08", "S09", "…
## $ group <chr> "control", "control", "control", "control", "control", "exp", "e…
## $ pre <dbl> 98.46606, 104.39774, 105.13377, 92.42574, 123.53268, 97.48676, 8…
## $ post <dbl> 106.70508, 89.09030, 123.67230, 70.70178, 124.95526, 101.61697, …
```
The other way to look at the table is a more graphical spreadsheet\-like version given by `View()` (capital ‘V’). It can be useful in the console, but don’t ever put this one in a script because it will create an annoying pop\-up window when the user runs it.
Now you can click on `smalldata` in the environment pane to open it up in a viewer that looks a bit like Excel.
You can get a quick summary of a dataset with the `summary()` function.
```
summary(smalldata)
```
```
## id group pre post
## Length:10 Length:10 Min. : 77.15 Min. : 70.70
## Class :character Class :character 1st Qu.: 93.57 1st Qu.: 92.22
## Mode :character Mode :character Median : 97.98 Median :107.76
## Mean : 98.57 Mean :103.79
## 3rd Qu.:103.88 3rd Qu.:121.19
## Max. :123.53 Max. :126.30
```
You can even do things like calculate the difference between the means of two columns.
```
pre_mean <- mean(smalldata$pre)
post_mean <- mean(smalldata$post)
post_mean - pre_mean
```
```
## [1] 5.223055
```
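Because the mean is a linear operation, you could equivalently take the mean of the row\-wise differences (a small sketch of the same calculation):
```
mean(smalldata$post - smalldata$pre) # same result: 5.223055
```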
### 2\.4\.2 Importing data
Built\-in data are nice for examples, but you’re probably more interested in your own data. There are many different types of files that you might work with when doing data analysis. These different file types are usually distinguished by the three letter [extension](https://psyteachr.github.io/glossary/e#extension "The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd).") following a period at the end of the file name. Here are some examples of different types of files and the functions you would use to read them in or write them out.
| Extension | File Type | Reading | Writing |
| --- | --- | --- | --- |
| .csv | Comma\-separated values | `readr::read_csv()` | `readr::write_csv()` |
| .tsv, .txt | Tab\-separated values | `readr::read_tsv()` | `readr::write_tsv()` |
| .xls, .xlsx | Excel workbook | `readxl::read_excel()` | NA |
| .sav, .mat, … | Multiple types | `rio::import()` | NA |
The double colon means that the function on the right comes from the package on the left, so `readr::read_csv()` refers to the `read_csv()` function in the `readr` package, and `readxl::read_excel()` refers to the function `read_excel()` in the package `readxl`. The function `rio::import()` from the `rio` package will read almost any type of data file, including SPSS and Matlab. Check the help with `?rio::import` to see a full list.
You can get a directory of data files used in this class for tutorials and exercises with the following code, which will create a directory called “data” in your project directory. Alternatively, you can download a [zip file of the datasets](data/data.zip).
```
dataskills::getdata()
```
Probably the most common file type you will encounter is [.csv](https://psyteachr.github.io/glossary/c#csv "Comma-separated variable: a file type for representing data where each variable is separated from the next by a comma.") (comma\-separated values). As the name suggests, a CSV file distinguishes which values go with which variable by separating them with commas, and text values are sometimes enclosed in double quotes. The first line of a file usually provides the names of the variables.
For example, here are the first few lines of a CSV containing personality scores:
```
subj_id,O,C,E,A,N
S01,4.428571429,4.5,3.333333333,5.142857143,1.625
S02,5.714285714,2.9,3.222222222,3,2.625
S03,5.142857143,2.8,6,3.571428571,2.5
S04,3.142857143,5.2,1.333333333,1.571428571,3.125
S05,5.428571429,4.4,2.444444444,4.714285714,1.625
```
There are six variables in this dataset, and their names are given in the first line of the file: `subj_id`, `O`, `C`, `E`, `A`, and `N`. You can see that the values for each of these variables are given in order, separated by commas, on each subsequent line of the file.
When you read in CSV files, it is best practice to use the `readr::read_csv()` function. The `readr` package is automatically loaded as part of the `tidyverse` package, which we will be using in almost every script. Note that you would normally want to store the result of the `read_csv()` function to an object, as so:
```r
csv_data <- read_csv("data/5factor.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## subj_id = col_character(),
## O = col_double(),
## C = col_double(),
## E = col_double(),
## A = col_double(),
## N = col_double()
## )
```
The `read_csv()` and `read_tsv()` functions will give you some information about the data you just read in so you can check the column names and [data types](data.html#data_types). For now, it’s enough to know that `col_double()` refers to columns with numbers and `col_character()` refers to columns with words. We’ll learn in the [troubleshooting](data.html#troubleshooting) section below how to fix it if the function guesses the wrong data type.
```
tsv_data <- read_tsv("data/5factor.txt")
xls_data <- readxl::read_xls("data/5factor.xls")
# you can load sheets from excel files by name or number
rep_data <- readxl::read_xls("data/5factor.xls", sheet = "replication")
spss_data <- rio::import("data/5factor.sav")
```
Once loaded, you can view your data using the data viewer. In the upper right hand window of RStudio, under the Environment tab, you will see the objects you just loaded (such as `csv_data`) listed.
If you click on the View icon next to an object, it will bring up a table view of the data you loaded in the top left pane of RStudio.
This allows you to check that the data have been loaded in properly. You can close the tab when you’re done looking at it; closing it won’t remove the object.
### 2\.4\.3 Creating data
If we are creating a data table from scratch, we can use the `tibble::tibble()` function, and type the data right in. The `tibble` package is part of the [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse "A set of R packages that help you create and work with tidy data") package that we loaded at the start of this chapter.
Let’s create a small table with the names of three Avatar characters and their bending type. The `tibble()` function takes arguments with the names that you want your columns to have. The values are vectors that list the column values in order.
If you don’t know the value for one of the cells, you can enter `NA`, which we have to do for Sokka because he doesn’t have any bending ability. If all the values in the column are the same, you can just enter one value and it will be copied for each row.
```
avatar <- tibble(
name = c("Katara", "Toph", "Sokka"),
bends = c("water", "earth", NA),
friendly = TRUE
)
# print it
avatar
```
| name | bends | friendly |
| --- | --- | --- |
| Katara | water | TRUE |
| Toph | earth | TRUE |
| Sokka | NA | TRUE |
### 2\.4\.4 Writing Data
If you have data that you want to save to a CSV file, use `readr::write_csv()`, as follows.
```
write_csv(avatar, "avatar.csv")
```
This will save the data in CSV format to your working directory.
* Create a new table called `family` with the first name, last name, and age of your family members.
* Save it to a CSV file called “family.csv.”
* Clear the object from your environment by restarting R or with the code `remove(family)`.
* Load the data back in and view it.
We’ll be working with [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column.") a lot in this class, but tabular data is made up of [vectors](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings."), which group together data with the same basic [data type](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object."). The following sections explain some of this terminology to help you understand the functions we’ll be learning to process and analyse data.
2\.5 Basic data types
---------------------
Data can be numbers, words, true/false values or combinations of these. In order to understand some later concepts, it’s useful to have a basic understanding of [data types](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object.") in R: [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer."), [character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text."), and [logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values."). There is also a specific data type called a [factor](https://psyteachr.github.io/glossary/f#factor "A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter"), which will probably give you a headache sooner or later, but we can ignore it for now.
### 2\.5\.1 Numeric data
All of the real numbers are [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer.") data types (imaginary numbers are “complex”). There are two types of numeric data, [integer](https://psyteachr.github.io/glossary/i#integer "A data type representing whole numbers.") and [double](https://psyteachr.github.io/glossary/d#double "A data type representing a real decimal number"). Integers are the whole numbers, like \-1, 0 and 1\. Doubles are numbers that can have fractional amounts. If you just type a plain number such as `10`, it is stored as a double, even if it doesn’t have a decimal point. If you want it to be an exact integer, use the `L` suffix (10L).
If you ever want to know the data type of something, use the `typeof` function.
```
typeof(10) # double
typeof(10.0) # double
typeof(10L) # integer
typeof(10i) # complex
```
```
## [1] "double"
## [1] "double"
## [1] "integer"
## [1] "complex"
```
If you want to know if something is numeric (a double or an integer), you can use the function `is.numeric()` and it will tell you if it is numeric (`TRUE`) or not (`FALSE`).
```
is.numeric(10L)
is.numeric(10.0)
is.numeric("Not a number")
```
```
## [1] TRUE
## [1] TRUE
## [1] FALSE
```
### 2\.5\.2 Character data
[Character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text.") strings are any text between quotation marks.
```
typeof("This is a character string")
typeof('You can use double or single quotes')
```
```
## [1] "character"
## [1] "character"
```
A string can itself include quotes, but you have to [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") them using a backslash to signal that the quote isn’t meant to be the end of the string.
```
my_string <- "The instructor said, \"R is cool,\" and the class agreed."
cat(my_string) # cat() prints the arguments
```
```
## The instructor said, "R is cool," and the class agreed.
```
### 2\.5\.3 Logical Data
[Logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values.") data (also sometimes called “boolean” values) can take one of two values: true or false. In R, we always write them in uppercase: `TRUE` and `FALSE`.
```
class(TRUE)
class(FALSE)
```
```
## [1] "logical"
## [1] "logical"
```
When you compare two values with an [operator](https://psyteachr.github.io/glossary/o#operator "A symbol that performs a mathematical operation, such as +, -, *, /"), such as checking to see if 10 is greater than 5, the resulting value is logical.
```
is.logical(10 > 5)
```
```
## [1] TRUE
```
You might also see logical values abbreviated as `T` and `F`, or `0` and `1`. This can cause some problems down the road, so we will always spell out the whole thing.
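For example, `T` and `F` are ordinary variables that can be overwritten, while `TRUE` and `FALSE` are reserved words. This quick sketch (not from the original examples) shows the kind of problem that can cause.
```
T <- 0       # nothing stops you (or a package) from reassigning T
T == TRUE    # FALSE -- a surprising result if you were using T to mean TRUE
rm(T)        # remove the local T so the built-in value is visible again
# TRUE <- 0  # by contrast, this line would be an error: TRUE is reserved
```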
What data types are these (integer, double, character, logical, or factor)? If you want to check your answers, see the sketch after the list.
* `100`
* `100L`
* `"100"`
* `100.0`
* `-100L`
* `factor(100)`
* `TRUE`
* `"TRUE"`
* `FALSE`
* `1 == 2`
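One way to check: `typeof()` reports the base type, and `class()` is the easiest way to spot a factor.
```
typeof(100)         # double
typeof(100L)        # integer
typeof("100")       # character
typeof(100.0)       # double
typeof(-100L)       # integer
class(factor(100))  # factor (typeof() would report "integer" here)
typeof(TRUE)        # logical
typeof("TRUE")      # character
typeof(FALSE)       # logical
typeof(1 == 2)      # logical
```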
2\.6 Basic container types
--------------------------
Individual data values can be grouped together into containers. The main types of containers we’ll work with are vectors, lists, and data tables.
### 2\.6\.1 Vectors
A [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") in R is like a vector in mathematics: a set of ordered elements. All of the elements in a vector must be of the same **data type** (numeric, character, logical). You can create a vector by enclosing the elements in the function `c()`.
```
## put information into a vector using c(...)
c(1, 2, 3, 4)
c("this", "is", "cool")
1:6 # shortcut to make a vector of all integers x:y
```
```
## [1] 1 2 3 4
## [1] "this" "is" "cool"
## [1] 1 2 3 4 5 6
```
What happens when you mix types? What class is the variable `mixed`?
```
mixed <- c(2, "good", 2L, "b", TRUE)
```
You can’t mix data types in a vector; all elements of the vector must be the same data type. If you mix them, R will “coerce” them so that they are all the same. If you mix doubles and integers, the integers will be changed to doubles. If you mix characters and numeric types, the numbers will be coerced to characters, so `10` would turn into “10\.”
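A small sketch (not part of the original example) makes the coercion visible.
```
typeof(c(1L, 2L, 3.5))  # "double": the integers are promoted to doubles
typeof(c(10, "ten"))    # "character": the number is coerced to a string
c(10, "ten")            # "10" "ten"
typeof(mixed)           # "character": the most flexible type wins
```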
#### 2\.6\.1\.1 Selecting values from a vector
If we wanted to pick specific values out of a vector by position, we can use square brackets (an [extract operator](https://psyteachr.github.io/glossary/e#extract-operator "A symbol used to get values from a container object, such as [, [[, or $"), or `[]`) after the vector.
```
values <- c(10, 20, 30, 40, 50)
values[2] # selects the second value
```
```
## [1] 20
```
You can select more than one value from the vector by putting a vector of numbers inside the square brackets. For example, you can select the 18th, 19th, 20th, 21st, 4th, 9th and 15th letter from the built\-in vector `LETTERS` (which gives all the uppercase letters in the Latin alphabet).
```
word <- c(18, 19, 20, 21, 4, 9, 15)
LETTERS[word]
```
```
## [1] "R" "S" "T" "U" "D" "I" "O"
```
Can you decode the secret message?
```
secret <- c(14, 5, 22, 5, 18, 7, 15, 14, 14, 1, 7, 9, 22, 5, 25, 15, 21, 21, 16)
```
You can also create ‘named’ vectors, where each element has a name. For example:
```
vec <- c(first = 77.9, second = -13.2, third = 100.1)
vec
```
```
## first second third
## 77.9 -13.2 100.1
```
We can then access elements by name using a character vector within the square brackets. We can put them in any order we want, and we can repeat elements:
```
vec[c("third", "second", "second")]
```
```
## third second second
## 100.1 -13.2 -13.2
```
We can get the vector of names using the `names()` function, and we can set or change them using something like `names(vec2) <- c("n1", "n2", "n3")`.
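Here is a quick sketch of both operations, reusing `vec` from above (`vec2` is just a copy made for the example).
```
names(vec)                         # "first"  "second" "third"
vec2 <- vec                        # copy, so the original keeps its names
names(vec2) <- c("n1", "n2", "n3") # assign new names
vec2
```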
Another way to access elements is by using a logical vector within the square brackets. This will pull out the elements of the vector for which the corresponding element of the logical vector is `TRUE`. If the logical vector doesn’t have the same length as the original, it will repeat. You can find out how long a vector is using the `length()` function.
```
length(LETTERS)
LETTERS[c(TRUE, FALSE)]
```
```
## [1] 26
## [1] "A" "C" "E" "G" "I" "K" "M" "O" "Q" "S" "U" "W" "Y"
```
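A very common way to build that logical vector is with a comparison; here is a short sketch reusing the `values` vector defined earlier.
```
values > 25          # FALSE FALSE  TRUE  TRUE  TRUE
values[values > 25]  # 30 40 50 -- keep only the values above 25
```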
#### 2\.6\.1\.2 Repeating Sequences
Here are some useful tricks to save typing when creating vectors.
In the command `x:y`, the `:` operator gives you the sequence of numbers starting at `x` and going towards `y` in increments of 1\.
```
1:10
15.3:20.5
0:-10
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
## [1] 15.3 16.3 17.3 18.3 19.3 20.3
## [1] 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10
```
What if you want to create a sequence but with something other than integer steps? You can use the `seq()` function. Look at the examples below and work out what the arguments do.
```
seq(from = -1, to = 1, by = 0.2)
seq(0, 100, length.out = 11)
seq(0, 10, along.with = LETTERS)
```
```
## [1] -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0
## [1] 0 10 20 30 40 50 60 70 80 90 100
## [1] 0.0 0.4 0.8 1.2 1.6 2.0 2.4 2.8 3.2 3.6 4.0 4.4 4.8 5.2 5.6
## [16] 6.0 6.4 6.8 7.2 7.6 8.0 8.4 8.8 9.2 9.6 10.0
```
What if you want to repeat a vector many times? You could either type it out (painful) or use the `rep()` function, which can repeat vectors in different ways.
```
rep(0, 10) # ten zeroes
rep(c(1L, 3L), times = 7) # alternating 1 and 3, 7 times
rep(c("A", "B", "C"), each = 2) # A to C, 2 times each
```
```
## [1] 0 0 0 0 0 0 0 0 0 0
## [1] 1 3 1 3 1 3 1 3 1 3 1 3 1 3
## [1] "A" "A" "B" "B" "C" "C"
```
The `rep()` function is useful to create a vector of logical values (`TRUE`/`FALSE` or `1`/`0`) to select values from another vector.
```
# Get subject IDs in the pattern Y Y N N ...
subject_ids <- 1:40
yynn <- rep(c(TRUE, FALSE), each = 2,
length.out = length(subject_ids))
subject_ids[yynn]
```
```
## [1] 1 2 5 6 9 10 13 14 17 18 21 22 25 26 29 30 33 34 37 38
```
#### 2\.6\.1\.3 Vectorized Operations
R performs calculations on vectors in a special way. Let’s look at an example using \\(z\\)\-scores. A \\(z\\)\-score is a [deviation score](https://psyteachr.github.io/glossary/d#deviation-score "A score minus the mean") (a score minus a mean) divided by a standard deviation. Let’s say we have a set of four IQ scores.
```
## example IQ scores: mu = 100, sigma = 15
iq <- c(86, 101, 127, 99)
```
If we want to subtract the mean from these four scores, we just use the following code:
```
iq - 100
```
```
## [1] -14 1 27 -1
```
This subtracts 100 from each element of the vector. R automatically assumes that this is what you wanted to do; it is called a [vectorized](https://psyteachr.github.io/glossary/v#vectorized "An operator or function that acts on each element in a vector") operation and it makes it possible to express operations more efficiently.
To calculate \\(z\\)\-scores we use the formula:
\\(z \= \\frac{X \- \\mu}{\\sigma}\\)
where \\(X\\) is a score, \\(\\mu\\) is the mean, and \\(\\sigma\\) is the standard deviation. We can express this formula in R as follows:
```
## z-scores
(iq - 100) / 15
```
```
## [1] -0.93333333 0.06666667 1.80000000 -0.06666667
```
You can see that it computed all four \\(z\\)\-scores with a single line of code. In later chapters, we’ll use vectorised operations to process our data, such as reverse\-scoring some questionnaire items.
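As a preview, reverse\-scoring is just another vectorised operation. This sketch assumes a 1\-to\-7 response scale; the responses are made up.
```
responses <- c(2, 7, 5, 1)  # hypothetical answers on a 1-7 scale
8 - responses               # reverse-scored: 6 1 3 7
```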
### 2\.6\.2 Lists
Recall that vectors can contain data of only one type. What if you want to store a collection of data of different data types? For that purpose you would use a [list](https://psyteachr.github.io/glossary/l#list "A container data type that allows items with different data types to be grouped together."). Define a list using the `list()` function.
```
data_types <- list(
double = 10.0,
integer = 10L,
character = "10",
logical = TRUE
)
str(data_types) # str() prints lists in a condensed format
```
```
## List of 4
## $ double : num 10
## $ integer : int 10
## $ character: chr "10"
## $ logical : logi TRUE
```
You can refer to elements of a list using square brackets like a vector, but you can also use the dollar sign notation (`$`) if the list items have names.
```
data_types$logical
```
```
## [1] TRUE
```
Explore the 5 ways shown below to extract a value from a list. What data type is each object? What is the difference between the single and double brackets? Which one is the same as the dollar sign?
```
bracket1 <- data_types[1]
bracket2 <- data_types[[1]]
name1 <- data_types["double"]
name2 <- data_types[["double"]]
dollar <- data_types$double
```
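If you want to check your conclusions, a sketch like this shows the key difference: single brackets return a (smaller) list, while double brackets and `$` return the element itself.
```
class(bracket1)              # "list": [ ] keeps the list wrapper
class(bracket2)              # "numeric": [[ ]] pulls the element out
identical(bracket2, dollar)  # TRUE: [[ ]] by position or name and $ agree here
```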
### 2\.6\.3 Tables
The built\-in, imported, and created data above are [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column."), data arranged in the form of a table.
Tabular data structures allow for a collection of data of different types (characters, integers, logical, etc.) but subject to the constraint that each “column” of the table (element of the list) must have the same number of elements. The base R version of a table is called a `data.frame`, while the ‘tidyverse’ version is called a `tibble`. Tibbles are far easier to work with, so we’ll be using those. To learn more about differences between these two data structures, see `vignette("tibble")`.
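A minimal sketch of the difference (the column names here are arbitrary):
```
df  <- data.frame(x = 1:3, y = c("a", "b", "c"))  # base R table
tbl <- tibble(x = 1:3, y = c("a", "b", "c"))      # tidyverse table
class(df)   # "data.frame"
class(tbl)  # "tbl_df" "tbl" "data.frame" -- a tibble is still a data.frame
```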
Tabular data becomes especially important when we talk about [tidy data](https://psyteachr.github.io/glossary/t#tidy-data "A format for data that maps the meaning onto the structure.") in [chapter 4](tidyr.html#tidyr), which consists of a set of simple principles for structuring data.
#### 2\.6\.3\.1 Creating a table
We learned how to create a table by importing an Excel or CSV file, and how to create a table from scratch using the `tibble()` function. You can also use the `tibble::tribble()` function to create a table by row, rather than by column. You start by listing the column names, each preceded by a tilde (`~`), then you list the values row by row, separated by commas (don’t forget a comma at the end of each row). This method can be easier for some data, but doesn’t let you use shortcuts, like setting all of the values in a column to the same value or a [repeating sequence](data.html#rep_seq).
```
# by column using tibble
avatar_by_col <- tibble(
name = c("Katara", "Toph", "Sokka", "Azula"),
bends = c("water", "earth", NA, "fire"),
friendly = rep(c(TRUE, FALSE), c(3, 1))
)
# by row using tribble
avatar_by_row <- tribble(
~name, ~bends, ~friendly,
"Katara", "water", TRUE,
"Toph", "earth", TRUE,
"Sokka", NA, TRUE,
"Azula", "fire", FALSE
)
```
#### 2\.6\.3\.2 Table info
We can get information about the table using the functions `ncol()` (number of columns), `nrow()` (number of rows), `dim()` (the number of rows and number of columns), and `names()` (the column names).
```
nrow(avatar) # how many rows?
ncol(avatar) # how many columns?
dim(avatar) # what are the table dimensions?
names(avatar) # what are the column names?
```
```
## [1] 3
## [1] 3
## [1] 3 3
## [1] "name" "bends" "friendly"
```
#### 2\.6\.3\.3 Accessing rows and columns
There are various ways of accessing specific columns or rows from a table. The ones below are from [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") and are useful to know about, but you’ll be learning easier (and more readable) ways in the [tidyr](tidyr.html#tidyr) and [dplyr](dplyr.html#dplyr) lessons. Examples of these base R accessing functions are provided here for reference, since you might see them in other people’s scripts.
```
katara <- avatar[1, ] # first row
type <- avatar[, 2] # second column (bends)
benders <- avatar[c(1, 2), ] # selected rows (by number)
bends_name <- avatar[, c("bends", "name")] # selected columns (by name)
friendly <- avatar$friendly # by column name
```
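As a preview of the dplyr approach mentioned above (it is covered properly in a later chapter), the same selections look something like this.
```
# dplyr preview: the same ideas, more readably
avatar %>% slice(1)                               # first row
avatar %>% select(bends, name)                    # selected columns by name
avatar %>% filter(name %in% c("Katara", "Toph"))  # selected rows by condition
```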
2\.7 Troubleshooting
--------------------
What if you import some data and it guesses the wrong column type? The most common reason is that a numeric column has some non\-numbers in it somewhere. Maybe someone wrote a note in an otherwise numeric column. Columns have to be all one data type, so if there are any characters, the whole column is converted to character strings, and numbers like `1.2` get represented as “1\.2,” which will cause very weird errors like `"100" < "9" == TRUE`. You can catch this by looking at the output from `read_csv()` or using `glimpse()` to check your data.
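A tiny sketch of why character “numbers” misbehave: they are compared alphabetically, not numerically.
```
"100" < "9"                          # TRUE, because "1" sorts before "9"
sort(c("10", "9", "100"))            # "10" "100" "9"
as.numeric("100") < as.numeric("9")  # FALSE, as you would expect
```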
The data directory you created with `dataskills::getdata()` contains a file called “mess.csv.” Let’s try loading this dataset.
```
mess <- read_csv("data/mess.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## `This is my messy dataset` = col_character()
## )
```
```
## Warning: 27 parsing failures.
## row col expected actual file
## 1 -- 1 columns 7 columns 'data/mess.csv'
## 2 -- 1 columns 7 columns 'data/mess.csv'
## 3 -- 1 columns 7 columns 'data/mess.csv'
## 4 -- 1 columns 7 columns 'data/mess.csv'
## 5 -- 1 columns 7 columns 'data/mess.csv'
## ... ... ......... ......... ...............
## See problems(...) for more details.
```
You’ll get a warning with many parsing failures, and `mess` ends up as just a single column of the word “junk.” View the file `data/mess.csv` by clicking on it in the File pane, and choosing “View File.” Here are the first few lines. What went wrong?
```
This is my messy dataset
junk,order,score,letter,good,min_max,date
junk,1,-1,a,1,1 - 2,2020-01-1
junk,missing,0.72,b,1,2 - 3,2020-01-2
junk,3,-0.62,c,FALSE,3 - 4,2020-01-3
junk,4,2.03,d,T,4 - 5,2020-01-4
```
First, the file starts with a note: “This is my messy dataset.” We want to skip the first two lines. You can do this with the argument `skip` in `read_csv()`.
```
mess <- read_csv("data/mess.csv", skip = 2)
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## junk = col_character(),
## order = col_character(),
## score = col_double(),
## letter = col_character(),
## good = col_character(),
## min_max = col_character(),
## date = col_character()
## )
```
```
mess
```
| junk | order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- | --- |
| junk | 1 | \-1\.00 | a | 1 | 1 \- 2 | 2020\-01\-1 |
| junk | missing | 0\.72 | b | 1 | 2 \- 3 | 2020\-01\-2 |
| junk | 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-3 |
| junk | 4 | 2\.03 | d | T | 4 \- 5 | 2020\-01\-4 |
| junk | 5 | NA | e | 1 | 5 \- 6 | 2020\-01\-5 |
| junk | 6 | 0\.99 | f | 0 | 6 \- 7 | 2020\-01\-6 |
| junk | 7 | 0\.03 | g | T | 7 \- 8 | 2020\-01\-7 |
| junk | 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-8 |
| junk | 9 | 0\.57 | i | 1 | 9 \- 10 | 2020\-01\-9 |
| junk | 10 | 0\.90 | j | T | 10 \- 11 | 2020\-01\-10 |
| junk | 11 | \-1\.55 | k | F | 11 \- 12 | 2020\-01\-11 |
| junk | 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| junk | 13 | 0\.15 | m | T | 13 \- 14 | 2020\-01\-13 |
| junk | 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| junk | 15 | \-0\.99 | o | 1 | 15 \- 16 | 2020\-01\-15 |
| junk | 16 | 1\.97 | p | T | 16 \- 17 | 2020\-01\-16 |
| junk | 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| junk | 18 | \-0\.90 | r | F | 18 \- 19 | 2020\-01\-18 |
| junk | 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| junk | 20 | \-0\.83 | t | 0 | 20 \- 21 | 2020\-01\-20 |
| junk | 21 | 1\.99 | u | T | 21 \- 22 | 2020\-01\-21 |
| junk | 22 | 0\.04 | v | F | 22 \- 23 | 2020\-01\-22 |
| junk | 23 | \-0\.40 | w | F | 23 \- 24 | 2020\-01\-23 |
| junk | 24 | \-0\.47 | x | 0 | 24 \- 25 | 2020\-01\-24 |
| junk | 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| junk | 26 | 0\.68 | z | 0 | 26 \- 27 | 2020\-01\-26 |
OK, that’s a little better, but this table is still a serious mess in several ways:
* `junk` is a column that we don’t need
* `order` should be an integer column
* `good` should be a logical column
* `good` uses all kinds of different ways to record TRUE and FALSE values
* `min_max` contains two pieces of numeric information, but is a character column
* `date` should be a date column
We’ll learn how to deal with this mess in the chapters on [tidy data](tidyr.html#tidyr) and [data wrangling](dplyr.html#dplyr), but we can fix a few things by setting the `col_types` argument in `read_csv()` to specify the column types for our two columns that were guessed wrong and skip the “junk” column. The argument `col_types` takes a list where the name of each item in the list is a column name and the value is from the table below. You can use the function, like `col_double()` or the abbreviation, like `"l"`. Omitted column names are guessed.
| function | abbreviation | data type |
| --- | --- | --- |
| col\_logical() | l | logical values |
| col\_integer() | i | integer values |
| col\_double() | d | numeric values |
| col\_character() | c | strings |
| col\_factor(levels, ordered) | f | a fixed set of values |
| col\_date(format \= "") | D | with the locale’s date\_format |
| col\_time(format \= "") | t | with the locale’s time\_format |
| col\_datetime(format \= "") | T | ISO8601 date time |
| col\_number() | n | numbers containing the grouping\_mark |
| col\_skip() | \_, \- | don’t import this column |
| col\_guess() | ? | parse using the “best” type based on the input |
```
# omitted values are guessed
# ?col_date for format options
ct <- list(
junk = "-", # skip this column
order = "i",
good = "l",
date = col_date(format = "%Y-%m-%d")
)
tidier <- read_csv("data/mess.csv",
skip = 2,
col_types = ct)
```
```
## Warning: 1 parsing failure.
## row col expected actual file
## 2 order an integer missing 'data/mess.csv'
```
You will get a message about “1 parsing failure” when you run this. Warnings look scary at first, but always start by reading the message. The table tells you what row (`2`) and column (`order`) the error was found in, what kind of data was expected (`integer`), and what the actual value was (`missing`). If you specifically tell `read_csv()` to import a column as an integer, any characters in the column will produce a warning like this and then be recorded as `NA`. You can manually set what the missing values are recorded as with the `na` argument.
```
tidiest <- read_csv("data/mess.csv",
skip = 2,
na = "missing",
col_types = ct)
```
Now `order` is an integer column where “missing” has become `NA`, `good` is a logical column where `0` and `F` are converted to `FALSE` and `1` and `T` are converted to `TRUE`, and `date` is a date type (with leading zeros added to the day). We’ll learn in later chapters how to fix the other problems.
```
tidiest
```
| order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- |
| 1 | \-1 | a | TRUE | 1 \- 2 | 2020\-01\-01 |
| NA | 0\.72 | b | TRUE | 2 \- 3 | 2020\-01\-02 |
| 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-03 |
| 4 | 2\.03 | d | TRUE | 4 \- 5 | 2020\-01\-04 |
| 5 | NA | e | TRUE | 5 \- 6 | 2020\-01\-05 |
| 6 | 0\.99 | f | FALSE | 6 \- 7 | 2020\-01\-06 |
| 7 | 0\.03 | g | TRUE | 7 \- 8 | 2020\-01\-07 |
| 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-08 |
| 9 | 0\.57 | i | TRUE | 9 \- 10 | 2020\-01\-09 |
| 10 | 0\.9 | j | TRUE | 10 \- 11 | 2020\-01\-10 |
| 11 | \-1\.55 | k | FALSE | 11 \- 12 | 2020\-01\-11 |
| 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| 13 | 0\.15 | m | TRUE | 13 \- 14 | 2020\-01\-13 |
| 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| 15 | \-0\.99 | o | TRUE | 15 \- 16 | 2020\-01\-15 |
| 16 | 1\.97 | p | TRUE | 16 \- 17 | 2020\-01\-16 |
| 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| 18 | \-0\.9 | r | FALSE | 18 \- 19 | 2020\-01\-18 |
| 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| 20 | \-0\.83 | t | FALSE | 20 \- 21 | 2020\-01\-20 |
| 21 | 1\.99 | u | TRUE | 21 \- 22 | 2020\-01\-21 |
| 22 | 0\.04 | v | FALSE | 22 \- 23 | 2020\-01\-22 |
| 23 | \-0\.4 | w | FALSE | 23 \- 24 | 2020\-01\-23 |
| 24 | \-0\.47 | x | FALSE | 24 \- 25 | 2020\-01\-24 |
| 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| 26 | 0\.68 | z | FALSE | 26 \- 27 | 2020\-01\-26 |
2\.8 Glossary
-------------
| term | definition |
| --- | --- |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [character](https://psyteachr.github.io/glossary/c#character) | A data type representing strings of text. |
| [csv](https://psyteachr.github.io/glossary/c#csv) | Comma\-separated variable: a file type for representing data where each variable is separated from the next by a comma. |
| [data type](https://psyteachr.github.io/glossary/d#data.type) | The kind of data represented by an object. |
| [deviation score](https://psyteachr.github.io/glossary/d#deviation.score) | A score minus the mean |
| [double](https://psyteachr.github.io/glossary/d#double) | A data type representing a real decimal number |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [extension](https://psyteachr.github.io/glossary/e#extension) | The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd). |
| [extract operator](https://psyteachr.github.io/glossary/e#extract.operator) | A symbol used to get values from a container object, such as \[, \[\[, or $ |
| [factor](https://psyteachr.github.io/glossary/f#factor) | A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [integer](https://psyteachr.github.io/glossary/i#integer) | A data type representing whole numbers. |
| [list](https://psyteachr.github.io/glossary/l#list) | A container data type that allows items with different data types to be grouped together. |
| [logical](https://psyteachr.github.io/glossary/l#logical) | A data type representing TRUE or FALSE values. |
| [numeric](https://psyteachr.github.io/glossary/n#numeric) | A data type representing a real decimal number or integer. |
| [operator](https://psyteachr.github.io/glossary/o#operator) | A symbol that performs a mathematical operation, such as \+, \-, \*, / |
| [tabular data](https://psyteachr.github.io/glossary/t#tabular.data) | Data in a rectangular table format, where each row has an entry for each column. |
| [tidy data](https://psyteachr.github.io/glossary/t#tidy.data) | A format for data that maps the meaning onto the structure. |
| [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse) | A set of R packages that help you create and work with tidy data |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [vectorized](https://psyteachr.github.io/glossary/v#vectorized) | An operator or function that acts on each element in a vector |
2\.9 Exercises
--------------
Download the [exercises](exercises/02_data_exercise.Rmd). See the [answers](exercises/02_data_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(2)
# run this to access the answers
dataskills::exercise(2, answers = TRUE)
```
avatar <- tibble(
name = c("Katara", "Toph", "Sokka"),
bends = c("water", "earth", NA),
friendly = TRUE
)
# print it
avatar
```
| name | bends | friendly |
| --- | --- | --- |
| Katara | water | TRUE |
| Toph | earth | TRUE |
| Sokka | NA | TRUE |
### 2\.4\.4 Writing Data
If you have data that you want to save to a CSV file, use `readr::write_csv()`, as follows.
```
write_csv(avatar, "avatar.csv")
```
This will save the data in CSV format to your working directory.
* Create a new table called `family` with the first name, last name, and age of your family members.
* Save it to a CSV file called “family.csv.”
* Clear the object from your environment by restarting R or with the code `remove(family)`.
* Load the data back in and view it.
We’ll be working with [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column.") a lot in this class, but tabular data is made up of [vectors](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings."), which group together data with the same basic [data type](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object."). The following sections explain some of this terminology to help you understand the functions we’ll be learning to process and analyse data.
### 2\.4\.1 Built\-in data
R comes with built\-in datasets. Some packages, like tidyr and dataskills, also contain data. The `data()` function lists the datasets available in a package.
```
# lists datasets in dataskills
data(package = "dataskills")
```
Type the name of a dataset into the console to see the data. Type `?smalldata` into the console to see the dataset description.
```
smalldata
```
| id | group | pre | post |
| --- | --- | --- | --- |
| S01 | control | 98\.46606 | 106\.70508 |
| S02 | control | 104\.39774 | 89\.09030 |
| S03 | control | 105\.13377 | 123\.67230 |
| S04 | control | 92\.42574 | 70\.70178 |
| S05 | control | 123\.53268 | 124\.95526 |
| S06 | exp | 97\.48676 | 101\.61697 |
| S07 | exp | 87\.75594 | 126\.30077 |
| S08 | exp | 77\.15375 | 72\.31229 |
| S09 | exp | 97\.00283 | 108\.80713 |
| S10 | exp | 102\.32338 | 113\.74732 |
You can also use the `data()` function to load a dataset into your [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs").
```
# loads smalldata into the environment
data("smalldata")
```
Always, always, always, look at your data once you’ve created or loaded a table. Also look at it after each step that transforms your table. There are three main ways to look at your tibble: `print()`, `glimpse()`, and `View()`.
The `print()` method can be run explicitly, but is more commonly called by just typing the variable name on the blank line. The default is not to print the entire table, but just the first 10 rows. It’s rare to print your data in a script; that is something you usually are doing for a sanity check, and you should just do it in the console.
Let’s look at the `smalldata` table that we made above.
```
smalldata
```
| id | group | pre | post |
| --- | --- | --- | --- |
| S01 | control | 98\.46606 | 106\.70508 |
| S02 | control | 104\.39774 | 89\.09030 |
| S03 | control | 105\.13377 | 123\.67230 |
| S04 | control | 92\.42574 | 70\.70178 |
| S05 | control | 123\.53268 | 124\.95526 |
| S06 | exp | 97\.48676 | 101\.61697 |
| S07 | exp | 87\.75594 | 126\.30077 |
| S08 | exp | 77\.15375 | 72\.31229 |
| S09 | exp | 97\.00283 | 108\.80713 |
| S10 | exp | 102\.32338 | 113\.74732 |
The function `glimpse()` gives a sideways version of the tibble. This is useful if the table is very wide and you can’t see all of the columns. It also tells you the data type of each column in angled brackets after each column name. We’ll learn about [data types](data.html#data_types) below.
```
glimpse(smalldata)
```
```
## Rows: 10
## Columns: 4
## $ id <chr> "S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08", "S09", "…
## $ group <chr> "control", "control", "control", "control", "control", "exp", "e…
## $ pre <dbl> 98.46606, 104.39774, 105.13377, 92.42574, 123.53268, 97.48676, 8…
## $ post <dbl> 106.70508, 89.09030, 123.67230, 70.70178, 124.95526, 101.61697, …
```
The other way to look at the table is a more graphical spreadsheet\-like version given by `View()` (capital ‘V’). It can be useful in the console, but don’t ever put this one in a script because it will create an annoying pop\-up window when the user runs it.
Now you can click on `smalldata` in the environment pane to open it up in a viewer that looks a bit like Excel.
You can get a quick summary of a dataset with the `summary()` function.
```
summary(smalldata)
```
```
## id group pre post
## Length:10 Length:10 Min. : 77.15 Min. : 70.70
## Class :character Class :character 1st Qu.: 93.57 1st Qu.: 92.22
## Mode :character Mode :character Median : 97.98 Median :107.76
## Mean : 98.57 Mean :103.79
## 3rd Qu.:103.88 3rd Qu.:121.19
## Max. :123.53 Max. :126.30
```
You can even do things like calculate the difference between the means of two columns.
```
pre_mean <- mean(smalldata$pre)
post_mean <- mean(smalldata$post)
post_mean - pre_mean
```
```
## [1] 5.223055
```
### 2\.4\.2 Importing data
Built\-in data are nice for examples, but you’re probably more interested in your own data. There are many different types of files that you might work with when doing data analysis. These different file types are usually distinguished by the three letter [extension](https://psyteachr.github.io/glossary/e#extension "The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd).") following a period at the end of the file name. Here are some examples of different types of files and the functions you would use to read them in or write them out.
| Extension | File Type | Reading | Writing |
| --- | --- | --- | --- |
| .csv | Comma\-separated values | `readr::read_csv()` | `readr::write_csv()` |
| .tsv, .txt | Tab\-separated values | `readr::read_tsv()` | `readr::write_tsv()` |
| .xls, .xlsx | Excel workbook | `readxl::read_excel()` | NA |
| .sav, .mat, … | Multiple types | `rio::import()` | NA |
The double colon means that the function on the right comes from the package on the left, so `readr::read_csv()` refers to the `read_csv()` function in the `readr` package, and `readxl::read_excel()` refers to the function `read_excel()` in the package `readxl`. The function `rio::import()` from the `rio` package will read almost any type of data file, including SPSS and Matlab. Check the help with `?rio::import` to see a full list.
You can get a directory of data files used in this class for tutorials and exercises with the following code, which will create a directory called “data” in your project directory. Alternatively, you can download a [zip file of the datasets](data/data.zip).
```
dataskills::getdata()
```
Probably the most common file type you will encounter is [.csv](https://psyteachr.github.io/glossary/c#csv "Comma-separated variable: a file type for representing data where each variable is separated from the next by a comma.") (comma\-separated values). As the name suggests, a CSV file distinguishes which values go with which variable by separating them with commas, and text values are sometimes enclosed in double quotes. The first line of a file usually provides the names of the variables.
For example, here are the first few lines of a CSV containing personality scores:
```
subj_id,O,C,E,A,N
S01,4.428571429,4.5,3.333333333,5.142857143,1.625
S02,5.714285714,2.9,3.222222222,3,2.625
S03,5.142857143,2.8,6,3.571428571,2.5
S04,3.142857143,5.2,1.333333333,1.571428571,3.125
S05,5.428571429,4.4,2.444444444,4.714285714,1.625
```
There are six variables in this dataset, and their names are given in the first line of the file: `subj_id`, `O`, `C`, `E`, `A`, and `N`. You can see that the values for each of these variables are given in order, separated by commas, on each subsequent line of the file.
When you read in CSV files, it is best practice to use the `readr::read_csv()` function. The `readr` package is automatically loaded as part of the `tidyverse` package, which we will be using in almost every script. Note that you would normally want to store the result of the `read_csv()` function to an object, as so:
```r
csv_data <- read_csv("data/5factor.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## subj_id = col_character(),
## O = col_double(),
## C = col_double(),
## E = col_double(),
## A = col_double(),
## N = col_double()
## )
```
The `read_csv()` and `read_tsv()` functions will give you some information about the data you just read in so you can check the column names and [data types](data.html#data_types). For now, it’s enough to know that `col_double()` refers to columns with numbers and `col_character()` refers to columns with words. We’ll learn in the [toroubleshooting](data.html#troubleshooting) section below how to fix it if the function guesses the wrong data type.
```
tsv_data <- read_tsv("data/5factor.txt")
xls_data <- readxl::read_xls("data/5factor.xls")
# you can load sheets from excel files by name or number
rep_data <- readxl::read_xls("data/5factor.xls", sheet = "replication")
spss_data <- rio::import("data/5factor.sav")
```
Once loaded, you can view your data using the data viewer. In the upper right hand window of RStudio, under the Environment tab, you will see the object `babynames` listed.
If you click on the View icon (, it will bring up a table view of the data you loaded in the top left pane of RStudio.
This allows you to check that the data have been loaded in properly. You can close the tab when you’re done looking at it, it won’t remove the object.
### 2\.4\.3 Creating data
If we are creating a data table from scratch, we can use the `tibble::tibble()` function, and type the data right in. The `tibble` package is part of the [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse "A set of R packages that help you create and work with tidy data") package that we loaded at the start of this chapter.
Let’s create a small table with the names of three Avatar characters and their bending type. The `tibble()` function takes arguments with the names that you want your columns to have. The values are vectors that list the column values in order.
If you don’t know the value for one of the cells, you can enter `NA`, which we have to do for Sokka because he doesn’t have any bending ability. If all the values in the column are the same, you can just enter one value and it will be copied for each row.
```
avatar <- tibble(
name = c("Katara", "Toph", "Sokka"),
bends = c("water", "earth", NA),
friendly = TRUE
)
# print it
avatar
```
| name | bends | friendly |
| --- | --- | --- |
| Katara | water | TRUE |
| Toph | earth | TRUE |
| Sokka | NA | TRUE |
### 2\.4\.4 Writing Data
If you have data that you want to save to a CSV file, use `readr::write_csv()`, as follows.
```
write_csv(avatar, "avatar.csv")
```
This will save the data in CSV format to your working directory.
* Create a new table called `family` with the first name, last name, and age of your family members.
* Save it to a CSV file called “family.csv.”
* Clear the object from your environment by restarting R or with the code `remove(family)`.
* Load the data back in and view it.
We’ll be working with [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column.") a lot in this class, but tabular data is made up of [vectors](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings."), which group together data with the same basic [data type](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object."). The following sections explain some of this terminology to help you understand the functions we’ll be learning to process and analyse data.
2\.5 Basic data types
---------------------
Data can be numbers, words, true/false values or combinations of these. In order to understand some later concepts, it’s useful to have a basic understanding of [data types](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object.") in R: [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer."), [character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text."), and [logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values.") There is also a specific data type called a [factor](https://psyteachr.github.io/glossary/f#factor "A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter"), which will probably give you a headache sooner or later, but we can ignore it for now.
### 2\.5\.1 Numeric data
All of the real numbers are [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer.") data types (imaginary numbers are “complex”). There are two types of numeric data, [integer](https://psyteachr.github.io/glossary/i#integer "A data type representing whole numbers.") and [double](https://psyteachr.github.io/glossary/d#double "A data type representing a real decimal number"). Integers are the whole numbers, like \-1, 0 and 1\. Doubles are numbers that can have fractional amounts. If you just type a plain number such as `10`, it is stored as a double, even if it doesn’t have a decimal point. If you want it to be an exact integer, use the `L` suffix (10L).
If you ever want to know the data type of something, use the `typeof` function.
```
typeof(10) # double
typeof(10.0) # double
typeof(10L) # integer
typeof(10i) # complex
```
```
## [1] "double"
## [1] "double"
## [1] "integer"
## [1] "complex"
```
If you want to know if something is numeric (a double or an integer), you can use the function `is.numeric()` and it will tell you if it is numeric (`TRUE`) or not (`FALSE`).
```
is.numeric(10L)
is.numeric(10.0)
is.numeric("Not a number")
```
```
## [1] TRUE
## [1] TRUE
## [1] FALSE
```
### 2\.5\.2 Character data
[Character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text.") strings are any text between quotation marks.
```
typeof("This is a character string")
typeof('You can use double or single quotes')
```
```
## [1] "character"
## [1] "character"
```
This can include quotes, but you have to [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it using a backslash to signal the the quote isn’t meant to be the end of the string.
```
my_string <- "The instructor said, \"R is cool,\" and the class agreed."
cat(my_string) # cat() prints the arguments
```
```
## The instructor said, "R is cool," and the class agreed.
```
### 2\.5\.3 Logical Data
[Logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values.") data (also sometimes called “boolean” values) is one of two values: true or false. In R, we always write them in uppercase: `TRUE` and `FALSE`.
```
class(TRUE)
class(FALSE)
```
```
## [1] "logical"
## [1] "logical"
```
When you compare two values with an [operator](https://psyteachr.github.io/glossary/o#operator "A symbol that performs a mathematical operation, such as +, -, *, /"), such as checking to see if 10 is greater than 5, the resulting value is logical.
```
is.logical(10 > 5)
```
```
## [1] TRUE
```
You might also see logical values abbreviated as `T` and `F`, or `0` and `1`. This can cause some problems down the road, so we will always spell out the whole thing.
What data types are these:
* `100` integer double character logical factor
* `100L` integer double character logical factor
* `"100"` integer double character logical factor
* `100.0` integer double character logical factor
* `-100L` integer double character logical factor
* `factor(100)` integer double character logical factor
* `TRUE` integer double character logical factor
* `"TRUE"` integer double character logical factor
* `FALSE` integer double character logical factor
* `1 == 2` integer double character logical factor
### 2\.5\.1 Numeric data
All of the real numbers are [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer.") data types (imaginary numbers are “complex”). There are two types of numeric data, [integer](https://psyteachr.github.io/glossary/i#integer "A data type representing whole numbers.") and [double](https://psyteachr.github.io/glossary/d#double "A data type representing a real decimal number"). Integers are the whole numbers, like \-1, 0 and 1\. Doubles are numbers that can have fractional amounts. If you just type a plain number such as `10`, it is stored as a double, even if it doesn’t have a decimal point. If you want it to be an exact integer, use the `L` suffix (10L).
If you ever want to know the data type of something, use the `typeof` function.
```
typeof(10) # double
typeof(10.0) # double
typeof(10L) # integer
typeof(10i) # complex
```
```
## [1] "double"
## [1] "double"
## [1] "integer"
## [1] "complex"
```
If you want to know if something is numeric (a double or an integer), you can use the function `is.numeric()` and it will tell you if it is numeric (`TRUE`) or not (`FALSE`).
```
is.numeric(10L)
is.numeric(10.0)
is.numeric("Not a number")
```
```
## [1] TRUE
## [1] TRUE
## [1] FALSE
```
### 2\.5\.2 Character data
[Character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text.") strings are any text between quotation marks.
```
typeof("This is a character string")
typeof('You can use double or single quotes')
```
```
## [1] "character"
## [1] "character"
```
This can include quotes, but you have to [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it using a backslash to signal the the quote isn’t meant to be the end of the string.
```
my_string <- "The instructor said, \"R is cool,\" and the class agreed."
cat(my_string) # cat() prints the arguments
```
```
## The instructor said, "R is cool," and the class agreed.
```
### 2\.5\.3 Logical Data
[Logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values.") data (also sometimes called “boolean” values) is one of two values: true or false. In R, we always write them in uppercase: `TRUE` and `FALSE`.
```
class(TRUE)
class(FALSE)
```
```
## [1] "logical"
## [1] "logical"
```
When you compare two values with an [operator](https://psyteachr.github.io/glossary/o#operator "A symbol that performs a mathematical operation, such as +, -, *, /"), such as checking to see if 10 is greater than 5, the resulting value is logical.
```
is.logical(10 > 5)
```
```
## [1] TRUE
```
You might also see logical values abbreviated as `T` and `F`, or `0` and `1`. This can cause some problems down the road, so we will always spell out the whole thing.
What data types are these:
* `100` integer double character logical factor
* `100L` integer double character logical factor
* `"100"` integer double character logical factor
* `100.0` integer double character logical factor
* `-100L` integer double character logical factor
* `factor(100)` integer double character logical factor
* `TRUE` integer double character logical factor
* `"TRUE"` integer double character logical factor
* `FALSE` integer double character logical factor
* `1 == 2` integer double character logical factor
2\.6 Basic container types
--------------------------
Individual data values can be grouped together into containers. The main types of containers we’ll work with are vectors, lists, and data tables.
### 2\.6\.1 Vectors
A [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") in R is like a vector in mathematics: a set of ordered elements. All of the elements in a vector must be of the same **data type** (numeric, character, logical). You can create a vector by enclosing the elements in the function `c()`.
```
## put information into a vector using c(...)
c(1, 2, 3, 4)
c("this", "is", "cool")
1:6 # shortcut to make a vector of all integers x:y
```
```
## [1] 1 2 3 4
## [1] "this" "is" "cool"
## [1] 1 2 3 4 5 6
```
What happens when you mix types? What class is the variable `mixed`?
```
mixed <- c(2, "good", 2L, "b", TRUE)
```
You can’t mix data types in a vector; all elements of the vector must be the same data type. If you mix them, R will “coerce” them so that they are all the same. If you mix doubles and integers, the integers will be changed to doubles. If you mix characters and numeric types, the numbers will be coerced to characters, so `10` would turn into “10\.”
#### 2\.6\.1\.1 Selecting values from a vector
If we wanted to pick specific values out of a vector by position, we can use square brackets (an [extract operator](https://psyteachr.github.io/glossary/e#extract-operator "A symbol used to get values from a container object, such as [, [[, or $"), or `[]`) after the vector.
```
values <- c(10, 20, 30, 40, 50)
values[2] # selects the second value
```
```
## [1] 20
```
You can select more than one value from the vector by putting a vector of numbers inside the square brackets. For example, you can select the 18th, 19th, 20th, 21st, 4th, 9th and 15th letter from the built\-in vector `LETTERS` (which gives all the uppercase letters in the Latin alphabet).
```
word <- c(18, 19, 20, 21, 4, 9, 15)
LETTERS[word]
```
```
## [1] "R" "S" "T" "U" "D" "I" "O"
```
Can you decode the secret message?
```
secret <- c(14, 5, 22, 5, 18, 7, 15, 14, 14, 1, 7, 9, 22, 5, 25, 15, 21, 21, 16)
```
You can also create ‘named’ vectors, where each element has a name. For example:
```
vec <- c(first = 77.9, second = -13.2, third = 100.1)
vec
```
```
## first second third
## 77.9 -13.2 100.1
```
We can then access elements by name using a character vector within the square brackets. We can put them in any order we want, and we can repeat elements:
```
vec[c("third", "second", "second")]
```
```
## third second second
## 100.1 -13.2 -13.2
```
We can get the vector of names using the `names()` function, and we can set or change them using something like `names(vec2) <- c("n1", "n2", "n3")`.
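For example, here is a small sketch building on the `vec` created above (the copy `vec2` is made just for illustration):

```
vec2 <- vec                          # copy vec so the original keeps its names
names(vec2)                          # get the current names
names(vec2) <- c("n1", "n2", "n3")   # assign new names
vec2["n2"]                           # access an element by its new name
```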
Another way to access elements is by using a logical vector within the square brackets. This will pull out the elements of the vector for which the corresponding element of the logical vector is `TRUE`. If the logical vector doesn’t have the same length as the original, it will repeat. You can find out how long a vector is using the `length()` function.
```
length(LETTERS)
LETTERS[c(TRUE, FALSE)]
```
```
## [1] 26
## [1] "A" "C" "E" "G" "I" "K" "M" "O" "Q" "S" "U" "W" "Y"
```
#### 2\.6\.1\.2 Repeating Sequences
Here are some useful tricks to save typing when creating vectors.
In the command `x:y`, the `:` operator gives you the sequence of numbers starting at `x` and going towards `y` in steps of 1 (counting down if `y` is smaller than `x`).
```
1:10
15.3:20.5
0:-10
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
## [1] 15.3 16.3 17.3 18.3 19.3 20.3
## [1] 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10
```
What if you want to create a sequence but with something other than integer steps? You can use the `seq()` function. Look at the examples below and work out what the arguments do.
```
seq(from = -1, to = 1, by = 0.2)
seq(0, 100, length.out = 11)
seq(0, 10, along.with = LETTERS)
```
```
## [1] -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0
## [1] 0 10 20 30 40 50 60 70 80 90 100
## [1] 0.0 0.4 0.8 1.2 1.6 2.0 2.4 2.8 3.2 3.6 4.0 4.4 4.8 5.2 5.6
## [16] 6.0 6.4 6.8 7.2 7.6 8.0 8.4 8.8 9.2 9.6 10.0
```
What if you want to repeat a vector many times? You could either type it out (painful) or use the `rep()` function, which can repeat vectors in different ways.
```
rep(0, 10) # ten zeroes
rep(c(1L, 3L), times = 7) # alternating 1 and 3, 7 times
rep(c("A", "B", "C"), each = 2) # A to C, 2 times each
```
```
## [1] 0 0 0 0 0 0 0 0 0 0
## [1] 1 3 1 3 1 3 1 3 1 3 1 3 1 3
## [1] "A" "A" "B" "B" "C" "C"
```
The `rep()` function is useful to create a vector of logical values (`TRUE`/`FALSE` or `1`/`0`) to select values from another vector.
```
# Get subject IDs in the pattern Y Y N N ...
subject_ids <- 1:40
yynn <- rep(c(TRUE, FALSE), each = 2,
length.out = length(subject_ids))
subject_ids[yynn]
```
```
## [1] 1 2 5 6 9 10 13 14 17 18 21 22 25 26 29 30 33 34 37 38
```
#### 2\.6\.1\.3 Vectorized Operations
R performs calculations on vectors in a special way. Let’s look at an example using \\(z\\)\-scores. A \\(z\\)\-score is a [deviation score](https://psyteachr.github.io/glossary/d#deviation-score "A score minus the mean") (a score minus a mean) divided by a standard deviation. Let’s say we have a set of four IQ scores.
```
## example IQ scores: mu = 100, sigma = 15
iq <- c(86, 101, 127, 99)
```
If we want to subtract the mean from these four scores, we just use the following code:
```
iq - 100
```
```
## [1] -14 1 27 -1
```
This subtracts 100 from each element of the vector. R automatically assumes that this is what you wanted to do; it is called a [vectorized](https://psyteachr.github.io/glossary/v#vectorized "An operator or function that acts on each element in a vector") operation and it makes it possible to express operations more efficiently.
To calculate \\(z\\)\-scores we use the formula:
\\(z \= \\frac{X \- \\mu}{\\sigma}\\)
where \\(X\\) is the vector of scores, \\(\\mu\\) is the mean, and \\(\\sigma\\) is the standard deviation. We can express this formula in R as follows:
```
## z-scores
(iq - 100) / 15
```
```
## [1] -0.93333333 0.06666667 1.80000000 -0.06666667
```
You can see that it computed all four \\(z\\)\-scores with a single line of code. In later chapters, we’ll use vectorized operations to process our data, such as reverse\-scoring some questionnaire items.
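As a preview, reverse\-scoring is itself just a vectorized operation: on a 7\-point scale, the reversed score is 8 minus the original. A minimal sketch with made\-up responses (the item name here is hypothetical):

```
## hypothetical responses to one questionnaire item, scored 1-7
item_3 <- c(2, 7, 5, 1, 6)
## reverse-score: 1 becomes 7, 2 becomes 6, and so on
item_3_rev <- 8 - item_3
item_3_rev
```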
### 2\.6\.2 Lists
Recall that vectors can contain data of only one type. What if you want to store a collection of data of different data types? For that purpose you would use a [list](https://psyteachr.github.io/glossary/l#list "A container data type that allows items with different data types to be grouped together."). Define a list using the `list()` function.
```
data_types <- list(
double = 10.0,
integer = 10L,
character = "10",
logical = TRUE
)
str(data_types) # str() prints lists in a condensed format
```
```
## List of 4
## $ double : num 10
## $ integer : int 10
## $ character: chr "10"
## $ logical : logi TRUE
```
You can refer to elements of a list using square brackets like a vector, but you can also use the dollar sign notation (`$`) if the list items have names.
```
data_types$logical
```
```
## [1] TRUE
```
Explore the 5 ways shown below to extract a value from a list. What data type is each object? What is the difference between the single and double brackets? Which one is the same as the dollar sign?
```
bracket1 <- data_types[1]
bracket2 <- data_types[[1]]
name1 <- data_types["double"]
name2 <- data_types[["double"]]
dollar <- data_types$double
```
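One way to explore these yourself is to inspect each object (a minimal sketch; run it and compare the results):

```
typeof(bracket1)   # what kind of container does single-bracket give you?
typeof(bracket2)   # and double-bracket?
str(name1)         # str() shows the structure in more detail
str(dollar)
```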
### 2\.6\.3 Tables
The built\-in, imported, and created data above are [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column."), data arranged in the form of a table.
Tabular data structures allow for a collection of data of different types (characters, integers, logical, etc.) but subject to the constraint that each “column” of the table (element of the list) must have the same number of elements. The base R version of a table is called a `data.frame`, while the ‘tidyverse’ version is called a `tibble`. Tibbles are far easier to work with, so we’ll be using those. To learn more about differences between these two data structures, see `vignette("tibble")`.
Tabular data becomes especially important when we talk about [tidy data](https://psyteachr.github.io/glossary/t#tidy-data "A format for data that maps the meaning onto the structure.") in [chapter 4](tidyr.html#tidyr), which consists of a set of simple principles for structuring data.
#### 2\.6\.3\.1 Creating a table
We learned how to create a table by importing an Excel or CSV file, and how to create a table from scratch using the `tibble()` function. You can also use the `tibble::tribble()` function to create a table by row, rather than by column. You start by listing the column names, each preceded by a tilde (`~`), then you list the values for each column, row by row, separated by commas (don’t forget a comma at the end of each row). This method can be easier for some data, but doesn’t let you use shortcuts, like setting all of the values in a column to the same value or a [repeating sequence](data.html#rep_seq).
```
# by column using tibble
avatar_by_col <- tibble(
name = c("Katara", "Toph", "Sokka", "Azula"),
bends = c("water", "earth", NA, "fire"),
friendly = rep(c(TRUE, FALSE), c(3, 1))
)
# by row using tribble
avatar_by_row <- tribble(
~name, ~bends, ~friendly,
"Katara", "water", TRUE,
"Toph", "earth", TRUE,
"Sokka", NA, TRUE,
"Azula", "fire", FALSE
)
```
#### 2\.6\.3\.2 Table info
We can get information about the table using the functions `ncol()` (number of columns), `nrow()` (number of rows), `dim()` (the number of rows and number of columns), and `names()` (the column names).
```
nrow(avatar) # how many rows?
ncol(avatar) # how many columns?
dim(avatar) # what are the table dimensions?
names(avatar) # what are the column names?
```
```
## [1] 3
## [1] 3
## [1] 3 3
## [1] "name" "bends" "friendly"
```
#### 2\.6\.3\.3 Accessing rows and columns
There are various ways of accessing specific columns or rows from a table. The ones below are from [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") and are useful to know about, but you’ll be learning easier (and more readable) ways in the [tidyr](tidyr.html#tidyr) and [dplyr](dplyr.html#dplyr) lessons. Examples of these base R accessing functions are provided here for reference, since you might see them in other people’s scripts.
```
katara <- avatar[1, ] # first row
type <- avatar[, 2] # second column (bends)
benders <- avatar[c(1, 2), ] # selected rows (by number)
bends_name <- avatar[, c("bends", "name")] # selected columns (by name)
friendly <- avatar$friendly # by column name
```
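One subtlety worth knowing (a small illustration, not something this lesson depends on): with a tibble, single\-bracket column selection keeps the table structure, while `[[` and `$` give you the underlying vector.

```
avatar[, "bends"]   # still a tibble with one column
avatar[["bends"]]   # a plain character vector
avatar$bends        # same as the line above
```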
2\.7 Troubleshooting
--------------------
What if you import some data and it guesses the wrong column type? The most common reason is that a numeric column has some non\-numbers in it somewhere. Maybe someone wrote a note in an otherwise numeric column. Columns have to be all one data type, so if there are any characters, the whole column is converted to character strings, and numbers like `1.2` get represented as `"1.2"`, which will cause very weird results like `"100" < "9"` evaluating to `TRUE`. You can catch this by looking at the output from `read_csv()` or using `glimpse()` to check your data.
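To see why that comparison goes wrong (a quick demonstration): character values are compared alphabetically, character by character, not numerically.

```
"100" < "9"   # TRUE, because "1" sorts before "9"
100 < 9       # FALSE, the comparison you actually wanted
```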
The data directory you created with `dataskills::getdata()` contains a file called “mess.csv.” Let’s try loading this dataset.
```
mess <- read_csv("data/mess.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## `This is my messy dataset` = col_character()
## )
```
```
## Warning: 27 parsing failures.
## row col expected actual file
## 1 -- 1 columns 7 columns 'data/mess.csv'
## 2 -- 1 columns 7 columns 'data/mess.csv'
## 3 -- 1 columns 7 columns 'data/mess.csv'
## 4 -- 1 columns 7 columns 'data/mess.csv'
## 5 -- 1 columns 7 columns 'data/mess.csv'
## ... ... ......... ......... ...............
## See problems(...) for more details.
```
You’ll get a warning with many parsing failures, and `mess` is just a single column of the word “junk.” View the file `data/mess.csv` by clicking on it in the File pane and choosing “View File.” Here are the first few lines. What went wrong?
```
This is my messy dataset
junk,order,score,letter,good,min_max,date
junk,1,-1,a,1,1 - 2,2020-01-1
junk,missing,0.72,b,1,2 - 3,2020-01-2
junk,3,-0.62,c,FALSE,3 - 4,2020-01-3
junk,4,2.03,d,T,4 - 5,2020-01-4
```
First, the file starts with a note: “This is my messy dataset.” We want to skip the first two lines. You can do this with the argument `skip` in `read_csv()`.
```
mess <- read_csv("data/mess.csv", skip = 2)
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## junk = col_character(),
## order = col_character(),
## score = col_double(),
## letter = col_character(),
## good = col_character(),
## min_max = col_character(),
## date = col_character()
## )
```
```
mess
```
| junk | order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- | --- |
| junk | 1 | \-1\.00 | a | 1 | 1 \- 2 | 2020\-01\-1 |
| junk | missing | 0\.72 | b | 1 | 2 \- 3 | 2020\-01\-2 |
| junk | 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-3 |
| junk | 4 | 2\.03 | d | T | 4 \- 5 | 2020\-01\-4 |
| junk | 5 | NA | e | 1 | 5 \- 6 | 2020\-01\-5 |
| junk | 6 | 0\.99 | f | 0 | 6 \- 7 | 2020\-01\-6 |
| junk | 7 | 0\.03 | g | T | 7 \- 8 | 2020\-01\-7 |
| junk | 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-8 |
| junk | 9 | 0\.57 | i | 1 | 9 \- 10 | 2020\-01\-9 |
| junk | 10 | 0\.90 | j | T | 10 \- 11 | 2020\-01\-10 |
| junk | 11 | \-1\.55 | k | F | 11 \- 12 | 2020\-01\-11 |
| junk | 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| junk | 13 | 0\.15 | m | T | 13 \- 14 | 2020\-01\-13 |
| junk | 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| junk | 15 | \-0\.99 | o | 1 | 15 \- 16 | 2020\-01\-15 |
| junk | 16 | 1\.97 | p | T | 16 \- 17 | 2020\-01\-16 |
| junk | 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| junk | 18 | \-0\.90 | r | F | 18 \- 19 | 2020\-01\-18 |
| junk | 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| junk | 20 | \-0\.83 | t | 0 | 20 \- 21 | 2020\-01\-20 |
| junk | 21 | 1\.99 | u | T | 21 \- 22 | 2020\-01\-21 |
| junk | 22 | 0\.04 | v | F | 22 \- 23 | 2020\-01\-22 |
| junk | 23 | \-0\.40 | w | F | 23 \- 24 | 2020\-01\-23 |
| junk | 24 | \-0\.47 | x | 0 | 24 \- 25 | 2020\-01\-24 |
| junk | 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| junk | 26 | 0\.68 | z | 0 | 26 \- 27 | 2020\-01\-26 |
OK, that’s a little better, but this table is still a serious mess in several ways:
* `junk` is a column that we don’t need
* `order` should be an integer column
* `good` should be a logical column
* `good` uses all kinds of different ways to record TRUE and FALSE values
* `min_max` contains two pieces of numeric information, but is a character column
* `date` should be a date column
We’ll learn how to deal with this mess in the chapters on [tidy data](tidyr.html#tidyr) and [data wrangling](dplyr.html#dplyr), but we can fix a few things now by setting the `col_types` argument in `read_csv()` to specify the types for the columns that were guessed wrong and to skip the “junk” column. The `col_types` argument takes a list where the name of each item is a column name and the value is from the table below. You can use either the function, like `col_double()`, or its abbreviation, like `"l"`. Any column names you omit are guessed.
| function | abbreviation | description |
| --- | --- | --- |
| col\_logical() | l | logical values |
| col\_integer() | i | integer values |
| col\_double() | d | numeric values |
| col\_character() | c | strings |
| col\_factor(levels, ordered) | f | a fixed set of values |
| col\_date(format \= "") | D | with the locale’s date\_format |
| col\_time(format \= "") | t | with the locale’s time\_format |
| col\_datetime(format \= "") | T | ISO8601 date time |
| col\_number() | n | numbers containing the grouping\_mark |
| col\_skip() | \_, \- | don’t import this column |
| col\_guess() | ? | parse using the “best” type based on the input |
```
# omitted values are guessed
# ?col_date for format options
ct <- list(
junk = "-", # skip this column
order = "i",
good = "l",
date = col_date(format = "%Y-%m-%d")
)
tidier <- read_csv("data/mess.csv",
skip = 2,
col_types = ct)
```
```
## Warning: 1 parsing failure.
## row col expected actual file
## 2 order an integer missing 'data/mess.csv'
```
You will get a message about “1 parsing failure” when you run this. Warnings look scary at first, but always start by reading the message. The table tells you what row (`2`) and column (`order`) the error was found in, what kind of data was expected (`integer`), and what the actual value was (`missing`). If you specifically tell `read_csv()` to import a column as an integer, any characters in the column will produce a warning like this and then be recorded as `NA`. You can manually set what the missing values are recorded as with the `na` argument.
```
tidiest <- read_csv("data/mess.csv",
skip = 2,
na = "missing",
col_types = ct)
```
Now `order` is an integer column where “missing” has become `NA`, `good` is a logical column where `0` and `F` are converted to `FALSE` and `1` and `T` to `TRUE`, and `date` is a date column (with leading zeros added to the day). We’ll learn in later chapters how to fix the other problems.
```
tidiest
```
| order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- |
| 1 | \-1 | a | TRUE | 1 \- 2 | 2020\-01\-01 |
| NA | 0\.72 | b | TRUE | 2 \- 3 | 2020\-01\-02 |
| 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-03 |
| 4 | 2\.03 | d | TRUE | 4 \- 5 | 2020\-01\-04 |
| 5 | NA | e | TRUE | 5 \- 6 | 2020\-01\-05 |
| 6 | 0\.99 | f | FALSE | 6 \- 7 | 2020\-01\-06 |
| 7 | 0\.03 | g | TRUE | 7 \- 8 | 2020\-01\-07 |
| 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-08 |
| 9 | 0\.57 | i | TRUE | 9 \- 10 | 2020\-01\-09 |
| 10 | 0\.9 | j | TRUE | 10 \- 11 | 2020\-01\-10 |
| 11 | \-1\.55 | k | FALSE | 11 \- 12 | 2020\-01\-11 |
| 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| 13 | 0\.15 | m | TRUE | 13 \- 14 | 2020\-01\-13 |
| 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| 15 | \-0\.99 | o | TRUE | 15 \- 16 | 2020\-01\-15 |
| 16 | 1\.97 | p | TRUE | 16 \- 17 | 2020\-01\-16 |
| 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| 18 | \-0\.9 | r | FALSE | 18 \- 19 | 2020\-01\-18 |
| 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| 20 | \-0\.83 | t | FALSE | 20 \- 21 | 2020\-01\-20 |
| 21 | 1\.99 | u | TRUE | 21 \- 22 | 2020\-01\-21 |
| 22 | 0\.04 | v | FALSE | 22 \- 23 | 2020\-01\-22 |
| 23 | \-0\.4 | w | FALSE | 23 \- 24 | 2020\-01\-23 |
| 24 | \-0\.47 | x | FALSE | 24 \- 25 | 2020\-01\-24 |
| 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| 26 | 0\.68 | z | FALSE | 26 \- 27 | 2020\-01\-26 |
2\.8 Glossary
-------------
| term | definition |
| --- | --- |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [character](https://psyteachr.github.io/glossary/c#character) | A data type representing strings of text. |
| [csv](https://psyteachr.github.io/glossary/c#csv) | Comma\-separated variable: a file type for representing data where each variable is separated from the next by a comma. |
| [data type](https://psyteachr.github.io/glossary/d#data.type) | The kind of data represented by an object. |
| [deviation score](https://psyteachr.github.io/glossary/d#deviation.score) | A score minus the mean |
| [double](https://psyteachr.github.io/glossary/d#double) | A data type representing a real decimal number |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [extension](https://psyteachr.github.io/glossary/e#extension) | The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd). |
| [extract operator](https://psyteachr.github.io/glossary/e#extract.operator) | A symbol used to get values from a container object, such as \[, \[\[, or $ |
| [factor](https://psyteachr.github.io/glossary/f#factor) | A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [integer](https://psyteachr.github.io/glossary/i#integer) | A data type representing whole numbers. |
| [list](https://psyteachr.github.io/glossary/l#list) | A container data type that allows items with different data types to be grouped together. |
| [logical](https://psyteachr.github.io/glossary/l#logical) | A data type representing TRUE or FALSE values. |
| [numeric](https://psyteachr.github.io/glossary/n#numeric) | A data type representing a real decimal number or integer. |
| [operator](https://psyteachr.github.io/glossary/o#operator) | A symbol that performs a mathematical operation, such as \+, \-, \*, / |
| [tabular data](https://psyteachr.github.io/glossary/t#tabular.data) | Data in a rectangular table format, where each row has an entry for each column. |
| [tidy data](https://psyteachr.github.io/glossary/t#tidy.data) | A format for data that maps the meaning onto the structure. |
| [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse) | A set of R packages that help you create and work with tidy data |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [vectorized](https://psyteachr.github.io/glossary/v#vectorized) | An operator or function that acts on each element in a vector |
2\.9 Exercises
--------------
Download the [exercises](exercises/02_data_exercise.Rmd). See the [answers](exercises/02_data_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(2)
# run this to access the answers
dataskills::exercise(2, answers = TRUE)
```
Chapter 2 Working with Data
===========================
2\.1 Learning Objectives
------------------------
1. Load [built\-in datasets](data.html#builtin) [(video)](https://youtu.be/Z5fK5VGmzlY)
2. [Import data](data.html#import_data) from CSV and Excel files [(video)](https://youtu.be/a7Ra-hnB8l8)
3. Create a [data table](data.html#tables) [(video)](https://youtu.be/k-aqhurepb4)
4. Understand the use the [basic data types](data.html#data_types) [(video)](https://youtu.be/jXQrF18Jaac)
5. Understand and use the [basic container types](data.html#containers) (list, vector) [(video)](https://youtu.be/4xU7uKNdoig)
6. Use [vectorized operations](data.html#vectorized_ops) [(video)](https://youtu.be/9I5MdS7UWmI)
7. Be able to [troubleshoot](#Troubleshooting) common data import problems [(video)](https://youtu.be/gcxn4LJ_vAI)
2\.2 Resources
--------------
* [Chapter 11: Data Import](http://r4ds.had.co.nz/data-import.html) in *R for Data Science*
* [RStudio Data Import Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/data-import.pdf)
* [Scottish Babynames](https://www.nrscotland.gov.uk/files//statistics/babies-first-names-full-list/summary-records/babies-names16-all-names-years.csv)
* [Developing an analysis in R/RStudio: Scottish babynames (1/2\)](https://www.youtube.com/watch?v=lAaVPMcMs1w)
* [Developing an analysis in R/RStudio: Scottish babynames (2/2\)](https://www.youtube.com/watch?v=lzdTHCcClqo)
2\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(dataskills)
```
2\.4 Data tables
----------------
### 2\.4\.1 Built\-in data
R comes with built\-in datasets. Some packages, like tidyr and dataskills, also contain data. The `data()` function lists the datasets available in a package.
```
# lists datasets in dataskills
data(package = "dataskills")
```
Type the name of a dataset into the console to see the data. Type `?smalldata` into the console to see the dataset description.
```
smalldata
```
| id | group | pre | post |
| --- | --- | --- | --- |
| S01 | control | 98\.46606 | 106\.70508 |
| S02 | control | 104\.39774 | 89\.09030 |
| S03 | control | 105\.13377 | 123\.67230 |
| S04 | control | 92\.42574 | 70\.70178 |
| S05 | control | 123\.53268 | 124\.95526 |
| S06 | exp | 97\.48676 | 101\.61697 |
| S07 | exp | 87\.75594 | 126\.30077 |
| S08 | exp | 77\.15375 | 72\.31229 |
| S09 | exp | 97\.00283 | 108\.80713 |
| S10 | exp | 102\.32338 | 113\.74732 |
You can also use the `data()` function to load a dataset into your [global environment](https://psyteachr.github.io/glossary/g#global-environment "The interactive workspace where your script runs").
```
# loads smalldata into the environment
data("smalldata")
```
Always, always, always, look at your data once you’ve created or loaded a table. Also look at it after each step that transforms your table. There are three main ways to look at your tibble: `print()`, `glimpse()`, and `View()`.
The `print()` method can be run explicitly, but it is more commonly called by just typing the variable name on a blank line. By default it prints only the first 10 rows, not the entire table. It’s rare to print your data in a script; that is usually a sanity check, and you should just do it in the console.
Let’s look at the `smalldata` table that we made above.
```
smalldata
```
| id | group | pre | post |
| --- | --- | --- | --- |
| S01 | control | 98\.46606 | 106\.70508 |
| S02 | control | 104\.39774 | 89\.09030 |
| S03 | control | 105\.13377 | 123\.67230 |
| S04 | control | 92\.42574 | 70\.70178 |
| S05 | control | 123\.53268 | 124\.95526 |
| S06 | exp | 97\.48676 | 101\.61697 |
| S07 | exp | 87\.75594 | 126\.30077 |
| S08 | exp | 77\.15375 | 72\.31229 |
| S09 | exp | 97\.00283 | 108\.80713 |
| S10 | exp | 102\.32338 | 113\.74732 |
The function `glimpse()` gives a sideways version of the tibble. This is useful if the table is very wide and you can’t see all of the columns. It also tells you the data type of each column in angled brackets after each column name. We’ll learn about [data types](data.html#data_types) below.
```
glimpse(smalldata)
```
```
## Rows: 10
## Columns: 4
## $ id <chr> "S01", "S02", "S03", "S04", "S05", "S06", "S07", "S08", "S09", "…
## $ group <chr> "control", "control", "control", "control", "control", "exp", "e…
## $ pre <dbl> 98.46606, 104.39774, 105.13377, 92.42574, 123.53268, 97.48676, 8…
## $ post <dbl> 106.70508, 89.09030, 123.67230, 70.70178, 124.95526, 101.61697, …
```
The other way to look at the table is a more graphical spreadsheet\-like version given by `View()` (capital ‘V’). It can be useful in the console, but don’t ever put this one in a script because it will create an annoying pop\-up window when the user runs it.
Now you can click on `smalldata` in the environment pane to open it up in a viewer that looks a bit like Excel.
You can get a quick summary of a dataset with the `summary()` function.
```
summary(smalldata)
```
```
## id group pre post
## Length:10 Length:10 Min. : 77.15 Min. : 70.70
## Class :character Class :character 1st Qu.: 93.57 1st Qu.: 92.22
## Mode :character Mode :character Median : 97.98 Median :107.76
## Mean : 98.57 Mean :103.79
## 3rd Qu.:103.88 3rd Qu.:121.19
## Max. :123.53 Max. :126.30
```
You can even do things like calculate the difference between the means of two columns.
```
pre_mean <- mean(smalldata$pre)
post_mean <- mean(smalldata$post)
post_mean - pre_mean
```
```
## [1] 5.223055
```
### 2\.4\.2 Importing data
Built\-in data are nice for examples, but you’re probably more interested in your own data. There are many different types of files that you might work with when doing data analysis. These different file types are usually distinguished by the three letter [extension](https://psyteachr.github.io/glossary/e#extension "The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd).") following a period at the end of the file name. Here are some examples of different types of files and the functions you would use to read them in or write them out.
| Extension | File Type | Reading | Writing |
| --- | --- | --- | --- |
| .csv | Comma\-separated values | `readr::read_csv()` | `readr::write_csv()` |
| .tsv, .txt | Tab\-separated values | `readr::read_tsv()` | `readr::write_tsv()` |
| .xls, .xlsx | Excel workbook | `readxl::read_excel()` | NA |
| .sav, .mat, … | Multiple types | `rio::import()` | NA |
The double colon means that the function on the right comes from the package on the left, so `readr::read_csv()` refers to the `read_csv()` function in the `readr` package, and `readxl::read_excel()` refers to the function `read_excel()` in the package `readxl`. The function `rio::import()` from the `rio` package will read almost any type of data file, including SPSS and Matlab. Check the help with `?rio::import` to see a full list.
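For example, the double colon lets you call a single function without attaching the whole package first (a small sketch reusing the example file shown below):

```
# call one function from readxl without attaching the package
readxl::read_excel("data/5factor.xls")

# equivalent to attaching the package first
library(readxl)
read_excel("data/5factor.xls")
```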
You can get a directory of data files used in this class for tutorials and exercises with the following code, which will create a directory called “data” in your project directory. Alternatively, you can download a [zip file of the datasets](data/data.zip).
```
dataskills::getdata()
```
Probably the most common file type you will encounter is [.csv](https://psyteachr.github.io/glossary/c#csv "Comma-separated variable: a file type for representing data where each variable is separated from the next by a comma.") (comma\-separated values). As the name suggests, a CSV file distinguishes which values go with which variable by separating them with commas, and text values are sometimes enclosed in double quotes. The first line of a file usually provides the names of the variables.
For example, here are the first few lines of a CSV containing personality scores:
```
subj_id,O,C,E,A,N
S01,4.428571429,4.5,3.333333333,5.142857143,1.625
S02,5.714285714,2.9,3.222222222,3,2.625
S03,5.142857143,2.8,6,3.571428571,2.5
S04,3.142857143,5.2,1.333333333,1.571428571,3.125
S05,5.428571429,4.4,2.444444444,4.714285714,1.625
```
There are six variables in this dataset, and their names are given in the first line of the file: `subj_id`, `O`, `C`, `E`, `A`, and `N`. You can see that the values for each of these variables are given in order, separated by commas, on each subsequent line of the file.
When you read in CSV files, it is best practice to use the `readr::read_csv()` function. The `readr` package is automatically loaded as part of the `tidyverse` package, which we will be using in almost every script. Note that you would normally want to store the result of the `read_csv()` function in an object, like so:
```r
csv_data <- read_csv("data/5factor.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## subj_id = col_character(),
## O = col_double(),
## C = col_double(),
## E = col_double(),
## A = col_double(),
## N = col_double()
## )
```
The `read_csv()` and `read_tsv()` functions will give you some information about the data you just read in so you can check the column names and [data types](data.html#data_types). For now, it’s enough to know that `col_double()` refers to columns with numbers and `col_character()` refers to columns with words. We’ll learn in the [troubleshooting](data.html#troubleshooting) section below how to fix it if the function guesses the wrong data type.
```
tsv_data <- read_tsv("data/5factor.txt")
xls_data <- readxl::read_xls("data/5factor.xls")
# you can load sheets from excel files by name or number
rep_data <- readxl::read_xls("data/5factor.xls", sheet = "replication")
spss_data <- rio::import("data/5factor.sav")
```
Once loaded, you can view your data using the data viewer. In the upper right hand window of RStudio, under the Environment tab, you will see the objects you have loaded (such as `csv_data`) listed.
If you click on the View icon next to an object, it will bring up a table view of the data you loaded in the top left pane of RStudio.
This allows you to check that the data have been loaded in properly. You can close the tab when you’re done looking at it; closing it won’t remove the object.
### 2\.4\.3 Creating data
If we are creating a data table from scratch, we can use the `tibble::tibble()` function, and type the data right in. The `tibble` package is part of the [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse "A set of R packages that help you create and work with tidy data") package that we loaded at the start of this chapter.
Let’s create a small table with the names of three Avatar characters and their bending type. The `tibble()` function takes arguments with the names that you want your columns to have. The values are vectors that list the column values in order.
If you don’t know the value for one of the cells, you can enter `NA`, which we have to do for Sokka because he doesn’t have any bending ability. If all the values in the column are the same, you can just enter one value and it will be copied for each row.
```
avatar <- tibble(
name = c("Katara", "Toph", "Sokka"),
bends = c("water", "earth", NA),
friendly = TRUE
)
# print it
avatar
```
| name | bends | friendly |
| --- | --- | --- |
| Katara | water | TRUE |
| Toph | earth | TRUE |
| Sokka | NA | TRUE |
### 2\.4\.4 Writing Data
If you have data that you want to save to a CSV file, use `readr::write_csv()`, as follows.
```
write_csv(avatar, "avatar.csv")
```
This will save the data in CSV format to your working directory.
* Create a new table called `family` with the first name, last name, and age of your family members.
* Save it to a CSV file called “family.csv.”
* Clear the object from your environment by restarting R or with the code `remove(family)`.
* Load the data back in and view it.
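If you get stuck, the round trip for the exercise above looks roughly like this, shown with the `avatar` table so the `family` version is left for you to write (a sketch; the file name is just an example):

```
write_csv(avatar, "avatar.csv")    # save to a CSV in the working directory
remove(avatar)                     # clear the object from the environment
avatar <- read_csv("avatar.csv")   # load it back in
avatar                             # view it
```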
We’ll be working with [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column.") a lot in this class, but tabular data is made up of [vectors](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings."), which group together data with the same basic [data type](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object."). The following sections explain some of this terminology to help you understand the functions we’ll be learning to process and analyse data.
2\.5 Basic data types
---------------------
Data can be numbers, words, true/false values, or combinations of these. In order to understand some later concepts, it’s useful to have a basic understanding of [data types](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object.") in R: [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer."), [character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text."), and [logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values."). There is also a specific data type called a [factor](https://psyteachr.github.io/glossary/f#factor "A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter"), which will probably give you a headache sooner or later, but we can ignore it for now.
### 2\.5\.1 Numeric data
All of the real numbers are [numeric](https://psyteachr.github.io/glossary/n#numeric "A data type representing a real decimal number or integer.") data types (imaginary numbers are “complex”). There are two types of numeric data, [integer](https://psyteachr.github.io/glossary/i#integer "A data type representing whole numbers.") and [double](https://psyteachr.github.io/glossary/d#double "A data type representing a real decimal number"). Integers are the whole numbers, like \-1, 0 and 1\. Doubles are numbers that can have fractional amounts. If you just type a plain number such as `10`, it is stored as a double, even if it doesn’t have a decimal point. If you want it to be an exact integer, use the `L` suffix (10L).
If you ever want to know the data type of something, use the `typeof` function.
```
typeof(10) # double
typeof(10.0) # double
typeof(10L) # integer
typeof(10i) # complex
```
```
## [1] "double"
## [1] "double"
## [1] "integer"
## [1] "complex"
```
If you want to know if something is numeric (a double or an integer), you can use the function `is.numeric()` and it will tell you if it is numeric (`TRUE`) or not (`FALSE`).
```
is.numeric(10L)
is.numeric(10.0)
is.numeric("Not a number")
```
```
## [1] TRUE
## [1] TRUE
## [1] FALSE
```
### 2\.5\.2 Character data
[Character](https://psyteachr.github.io/glossary/c#character "A data type representing strings of text.") strings are any text between quotation marks.
```
typeof("This is a character string")
typeof('You can use double or single quotes')
```
```
## [1] "character"
## [1] "character"
```
This can include quotes, but you have to [escape](https://psyteachr.github.io/glossary/e#escape "Include special characters like \" inside of a string by prefacing them with a backslash.") it using a backslash to signal the the quote isn’t meant to be the end of the string.
```
my_string <- "The instructor said, \"R is cool,\" and the class agreed."
cat(my_string) # cat() prints the arguments
```
```
## The instructor said, "R is cool," and the class agreed.
```
### 2\.5\.3 Logical Data
[Logical](https://psyteachr.github.io/glossary/l#logical "A data type representing TRUE or FALSE values.") data (also sometimes called “boolean” values) is one of two values: true or false. In R, we always write them in uppercase: `TRUE` and `FALSE`.
```
class(TRUE)
class(FALSE)
```
```
## [1] "logical"
## [1] "logical"
```
When you compare two values with an [operator](https://psyteachr.github.io/glossary/o#operator "A symbol that performs a mathematical operation, such as +, -, *, /"), such as checking to see if 10 is greater than 5, the resulting value is logical.
```
is.logical(10 > 5)
```
```
## [1] TRUE
```
You might also see logical values abbreviated as `T` and `F`, or `0` and `1`. This can cause some problems down the road, so we will always spell out the whole thing.
What data types are these:
* `100` integer double character logical factor
* `100L` integer double character logical factor
* `"100"` integer double character logical factor
* `100.0` integer double character logical factor
* `-100L` integer double character logical factor
* `factor(100)` integer double character logical factor
* `TRUE` integer double character logical factor
* `"TRUE"` integer double character logical factor
* `FALSE` integer double character logical factor
* `1 == 2` integer double character logical factor
2\.6 Basic container types
--------------------------
Individual data values can be grouped together into containers. The main types of containers we’ll work with are vectors, lists, and data tables.
### 2\.6\.1 Vectors
A [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") in R is like a vector in mathematics: a set of ordered elements. All of the elements in a vector must be of the same **data type** (numeric, character, logical). You can create a vector by enclosing the elements in the function `c()`.
```
## put information into a vector using c(...)
c(1, 2, 3, 4)
c("this", "is", "cool")
1:6 # shortcut to make a vector of all integers x:y
```
```
## [1] 1 2 3 4
## [1] "this" "is" "cool"
## [1] 1 2 3 4 5 6
```
What happens when you mix types? What class is the variable `mixed`?
```
mixed <- c(2, "good", 2L, "b", TRUE)
```
You can’t mix data types in a vector; all elements of the vector must be the same data type. If you mix them, R will “coerce” them so that they are all the same. If you mix doubles and integers, the integers will be changed to doubles. If you mix characters and numeric types, the numbers will be coerced to characters, so `10` would turn into “10\.”
#### 2\.6\.1\.1 Selecting values from a vector
If we wanted to pick specific values out of a vector by position, we can use square brackets (an [extract operator](https://psyteachr.github.io/glossary/e#extract-operator "A symbol used to get values from a container object, such as [, [[, or $"), or `[]`) after the vector.
```
values <- c(10, 20, 30, 40, 50)
values[2] # selects the second value
```
```
## [1] 20
```
You can select more than one value from the vector by putting a vector of numbers inside the square brackets. For example, you can select the 18th, 19th, 20th, 21st, 4th, 9th and 15th letter from the built\-in vector `LETTERS` (which gives all the uppercase letters in the Latin alphabet).
```
word <- c(18, 19, 20, 21, 4, 9, 15)
LETTERS[word]
```
```
## [1] "R" "S" "T" "U" "D" "I" "O"
```
Can you decode the secret message?
```
secret <- c(14, 5, 22, 5, 18, 7, 15, 14, 14, 1, 7, 9, 22, 5, 25, 15, 21, 21, 16)
```
You can also create ‘named’ vectors, where each element has a name. For example:
```
vec <- c(first = 77.9, second = -13.2, third = 100.1)
vec
```
```
## first second third
## 77.9 -13.2 100.1
```
We can then access elements by name using a character vector within the square brackets. We can put them in any order we want, and we can repeat elements:
```
vec[c("third", "second", "second")]
```
```
## third second second
## 100.1 -13.2 -13.2
```
We can get the vector of names using the `names()` function, and we can set or change them using something like `names(vec2) <- c("n1", "n2", "n3")`.
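A quick sketch using the `vec` object created above (just an illustration of getting and setting names):
```
names(vec)                        # "first"  "second" "third"
names(vec) <- c("n1", "n2", "n3") # replace the names
vec["n2"]                         # access by the new name
```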
Another way to access elements is by using a logical vector within the square brackets. This will pull out the elements of the vector for which the corresponding element of the logical vector is `TRUE`. If the logical vector doesn’t have the same length as the original, it will repeat. You can find out how long a vector is using the `length()` function.
```
length(LETTERS)
LETTERS[c(TRUE, FALSE)]
```
```
## [1] 26
## [1] "A" "C" "E" "G" "I" "K" "M" "O" "Q" "S" "U" "W" "Y"
```
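In practice, the logical vector usually comes from a comparison. A minimal sketch reusing the `values` vector from above:
```
values > 25         # FALSE FALSE  TRUE  TRUE  TRUE
values[values > 25] # 30 40 50
```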
#### 2\.6\.1\.2 Repeating Sequences
Here are some useful tricks to save typing when creating vectors.
In the command `x:y`, the `:` operator gives you the sequence of numbers starting at `x` and going up (or down) towards `y` in steps of 1.
```
1:10
15.3:20.5
0:-10
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
## [1] 15.3 16.3 17.3 18.3 19.3 20.3
## [1] 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10
```
What if you want to create a sequence but with something other than integer steps? You can use the `seq()` function. Look at the examples below and work out what the arguments do.
```
seq(from = -1, to = 1, by = 0.2)
seq(0, 100, length.out = 11)
seq(0, 10, along.with = LETTERS)
```
```
## [1] -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0
## [1] 0 10 20 30 40 50 60 70 80 90 100
## [1] 0.0 0.4 0.8 1.2 1.6 2.0 2.4 2.8 3.2 3.6 4.0 4.4 4.8 5.2 5.6
## [16] 6.0 6.4 6.8 7.2 7.6 8.0 8.4 8.8 9.2 9.6 10.0
```
What if you want to repeat a vector many times? You could either type it out (painful) or use the `rep()` function, which can repeat vectors in different ways.
```
rep(0, 10) # ten zeroes
rep(c(1L, 3L), times = 7) # alternating 1 and 3, 7 times
rep(c("A", "B", "C"), each = 2) # A to C, 2 times each
```
```
## [1] 0 0 0 0 0 0 0 0 0 0
## [1] 1 3 1 3 1 3 1 3 1 3 1 3 1 3
## [1] "A" "A" "B" "B" "C" "C"
```
The `rep()` function is also useful for creating a vector of logical values (`TRUE`/`FALSE` or `1`/`0`) to select values from another vector.
```
# Get subject IDs in the pattern Y Y N N ...
subject_ids <- 1:40
yynn <- rep(c(TRUE, FALSE), each = 2,
length.out = length(subject_ids))
subject_ids[yynn]
```
```
## [1] 1 2 5 6 9 10 13 14 17 18 21 22 25 26 29 30 33 34 37 38
```
#### 2\.6\.1\.3 Vectorized Operations
R performs calculations on vectors in a special way. Let’s look at an example using \\(z\\)\-scores. A \\(z\\)\-score is a [deviation score](https://psyteachr.github.io/glossary/d#deviation-score "A score minus the mean") (a score minus a mean) divided by a standard deviation. Let’s say we have a set of four IQ scores.
```
## example IQ scores: mu = 100, sigma = 15
iq <- c(86, 101, 127, 99)
```
If we want to subtract the mean from these four scores, we just use the following code:
```
iq - 100
```
```
## [1] -14 1 27 -1
```
This subtracts 100 from each element of the vector. R automatically assumes that this is what you wanted to do; it is called a [vectorized](https://psyteachr.github.io/glossary/v#vectorized "An operator or function that acts on each element in a vector") operation and it makes it possible to express operations more efficiently.
To calculate \\(z\\)\-scores we use the formula:
\\(z \= \\frac{X \- \\mu}{\\sigma}\\)
where \\(X\\) are the scores, \\(\\mu\\) is the mean, and \\(\\sigma\\) is the standard deviation. We can express this formula in R as follows:
```
## z-scores
(iq - 100) / 15
```
```
## [1] -0.93333333 0.06666667 1.80000000 -0.06666667
```
You can see that it computed all four \\(z\\)\-scores with a single line of code. In later chapters, we’ll use vectorised operations to process our data, such as reverse\-scoring some questionnaire items.
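Vectorized operations also work element\-wise between two vectors of the same length, which is how you could compute a difference score for each person. A hedged sketch with made\-up numbers:
```
pre  <- c(98.5, 104.4, 105.1)
post <- c(106.7,  89.1, 123.7)
post - pre # 8.2 -15.3 18.6 -- one difference score per person
```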
### 2\.6\.2 Lists
Recall that vectors can contain data of only one type. What if you want to store a collection of data of different data types? For that purpose you would use a [list](https://psyteachr.github.io/glossary/l#list "A container data type that allows items with different data types to be grouped together."). Define a list using the `list()` function.
```
data_types <- list(
double = 10.0,
integer = 10L,
character = "10",
logical = TRUE
)
str(data_types) # str() prints lists in a condensed format
```
```
## List of 4
## $ double : num 10
## $ integer : int 10
## $ character: chr "10"
## $ logical : logi TRUE
```
You can refer to elements of a list using square brackets like a vector, but you can also use the dollar sign notation (`$`) if the list items have names.
```
data_types$logical
```
```
## [1] TRUE
```
Explore the 5 ways shown below to extract a value from a list. What data type is each object? What is the difference between the single and double brackets? Which one is the same as the dollar sign?
```
bracket1 <- data_types[1]
bracket2 <- data_types[[1]]
name1 <- data_types["double"]
name2 <- data_types[["double"]]
dollar <- data_types$double
```
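If you want to check your conclusions, `class()` shows what kind of object each extraction returns. A quick sketch (added for illustration):
```
class(data_types[1])     # "list"    -- single brackets return a smaller list
class(data_types[[1]])   # "numeric" -- double brackets return the element itself
class(data_types$double) # "numeric" -- $ behaves like [["double"]]
```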
### 2\.6\.3 Tables
The built\-in, imported, and created data above are [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column."), data arranged in the form of a table.
Tabular data structures allow for a collection of data of different types (characters, integers, logical, etc.) but subject to the constraint that each “column” of the table (element of the list) must have the same number of elements. The base R version of a table is called a `data.frame`, while the ‘tidyverse’ version is called a `tibble`. Tibbles are far easier to work with, so we’ll be using those. To learn more about differences between these two data structures, see `vignette("tibble")`.
Tabular data becomes especially important when we talk about [tidy data](https://psyteachr.github.io/glossary/t#tidy-data "A format for data that maps the meaning onto the structure.") in [chapter 4](tidyr.html#tidyr), which consists of a set of simple principles for structuring data.
#### 2\.6\.3\.1 Creating a table
We learned how to create a table by importing an Excel or CSV file, and creating a table from scratch using the `tibble()` function. You can also use the `tibble::tribble()` function to create a table by row, rather than by column. You start by listing the column names, each preceded by a tilde (`~`), then you list the values for each column, row by row, separated by commas (don’t forget a comma at the end of each row). This method can be easier for some data, but doesn’t let you use shortcuts, like setting all of the values in a column to the same value or a [repeating sequence](data.html#rep_seq).
```
# by column using tibble
avatar_by_col <- tibble(
name = c("Katara", "Toph", "Sokka", "Azula"),
bends = c("water", "earth", NA, "fire"),
friendly = rep(c(TRUE, FALSE), c(3, 1))
)
# by row using tribble
avatar_by_row <- tribble(
~name, ~bends, ~friendly,
"Katara", "water", TRUE,
"Toph", "earth", TRUE,
"Sokka", NA, TRUE,
"Azula", "fire", FALSE
)
```
#### 2\.6\.3\.2 Table info
We can get information about the table using the functions `ncol()` (number of columns), `nrow()` (number of rows), `dim()` (the number of rows and number of columns), and `names()` (the column names).
```
nrow(avatar) # how many rows?
ncol(avatar) # how many columns?
dim(avatar) # what are the table dimensions?
names(avatar) # what are the column names?
```
```
## [1] 3
## [1] 3
## [1] 3 3
## [1] "name" "bends" "friendly"
```
#### 2\.6\.3\.3 Accessing rows and columns
There are various ways of accessing specific columns or rows from a table. The ones below are from [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") and are useful to know about, but you’ll be learning easier (and more readable) ways in the [tidyr](tidyr.html#tidyr) and [dplyr](dplyr.html#dplyr) lessons. Examples of these base R accessing functions are provided here for reference, since you might see them in other people’s scripts.
```
katara <- avatar[1, ] # first row
type <- avatar[, 2] # second column (bends)
benders <- avatar[c(1, 2), ] # selected rows (by number)
bends_name <- avatar[, c("bends", "name")] # selected columns (by name)
friendly <- avatar$friendly # by column name
```
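For comparison, here is a hedged preview of the tidyverse equivalents you’ll meet in the [dplyr](dplyr.html#dplyr) lesson; these calls are illustrations, not part of the original example:
```
library(dplyr)
slice(avatar, 1)            # first row
slice(avatar, 1:2)          # selected rows (by number)
select(avatar, bends, name) # selected columns (by name)
pull(avatar, friendly)      # a single column as a vector
```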
2\.7 Troubleshooting
--------------------
What if you import some data and it guesses the wrong column type? The most common reason is that a numeric column has some non\-numbers in it somewhere. Maybe someone wrote a note in an otherwise numeric column. Columns have to be all one data type, so if there are any characters, the whole column is converted to character strings, and numbers like `1.2` get represented as `"1.2"`, which will cause very weird errors like `"100" < "9" == TRUE`. You can catch this by looking at the output from `read_csv()` or using `glimpse()` to check your data.
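To see why character columns cause trouble, compare string and numeric comparison; a quick sketch:
```
"100" < "9" # TRUE  -- characters compare alphabetically, digit by digit
100 < 9     # FALSE -- numbers compare numerically
```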
The data directory you created with `dataskills::getdata()` contains a file called “mess.csv.” Let’s try loading this dataset.
```
mess <- read_csv("data/mess.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## `This is my messy dataset` = col_character()
## )
```
```
## Warning: 27 parsing failures.
## row col expected actual file
## 1 -- 1 columns 7 columns 'data/mess.csv'
## 2 -- 1 columns 7 columns 'data/mess.csv'
## 3 -- 1 columns 7 columns 'data/mess.csv'
## 4 -- 1 columns 7 columns 'data/mess.csv'
## 5 -- 1 columns 7 columns 'data/mess.csv'
## ... ... ......... ......... ...............
## See problems(...) for more details.
```
You’ll get a warning with many parsing errors and `mess` is just a single column of the word “junk.” View the file `data/mess.csv` by clicking on it in the File pane and choosing “View File.” Here are the first few lines. What went wrong?
```
This is my messy dataset
junk,order,score,letter,good,min_max,date
junk,1,-1,a,1,1 - 2,2020-01-1
junk,missing,0.72,b,1,2 - 3,2020-01-2
junk,3,-0.62,c,FALSE,3 - 4,2020-01-3
junk,4,2.03,d,T,4 - 5,2020-01-4
```
First, the file starts with a note: “This is my messy dataset.” We want to skip the first two lines. You can do this with the argument `skip` in `read_csv()`.
```
mess <- read_csv("data/mess.csv", skip = 2)
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## junk = col_character(),
## order = col_character(),
## score = col_double(),
## letter = col_character(),
## good = col_character(),
## min_max = col_character(),
## date = col_character()
## )
```
```
mess
```
| junk | order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- | --- |
| junk | 1 | \-1\.00 | a | 1 | 1 \- 2 | 2020\-01\-1 |
| junk | missing | 0\.72 | b | 1 | 2 \- 3 | 2020\-01\-2 |
| junk | 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-3 |
| junk | 4 | 2\.03 | d | T | 4 \- 5 | 2020\-01\-4 |
| junk | 5 | NA | e | 1 | 5 \- 6 | 2020\-01\-5 |
| junk | 6 | 0\.99 | f | 0 | 6 \- 7 | 2020\-01\-6 |
| junk | 7 | 0\.03 | g | T | 7 \- 8 | 2020\-01\-7 |
| junk | 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-8 |
| junk | 9 | 0\.57 | i | 1 | 9 \- 10 | 2020\-01\-9 |
| junk | 10 | 0\.90 | j | T | 10 \- 11 | 2020\-01\-10 |
| junk | 11 | \-1\.55 | k | F | 11 \- 12 | 2020\-01\-11 |
| junk | 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| junk | 13 | 0\.15 | m | T | 13 \- 14 | 2020\-01\-13 |
| junk | 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| junk | 15 | \-0\.99 | o | 1 | 15 \- 16 | 2020\-01\-15 |
| junk | 16 | 1\.97 | p | T | 16 \- 17 | 2020\-01\-16 |
| junk | 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| junk | 18 | \-0\.90 | r | F | 18 \- 19 | 2020\-01\-18 |
| junk | 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| junk | 20 | \-0\.83 | t | 0 | 20 \- 21 | 2020\-01\-20 |
| junk | 21 | 1\.99 | u | T | 21 \- 22 | 2020\-01\-21 |
| junk | 22 | 0\.04 | v | F | 22 \- 23 | 2020\-01\-22 |
| junk | 23 | \-0\.40 | w | F | 23 \- 24 | 2020\-01\-23 |
| junk | 24 | \-0\.47 | x | 0 | 24 \- 25 | 2020\-01\-24 |
| junk | 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| junk | 26 | 0\.68 | z | 0 | 26 \- 27 | 2020\-01\-26 |
OK, that’s a little better, but this table is still a serious mess in several ways:
* `junk` is a column that we don’t need
* `order` should be an integer column
* `good` should be a logical column
* `good` uses all kinds of different ways to record TRUE and FALSE values
* `min_max` contains two pieces of numeric information, but is a character column
* `date` should be a date column
We’ll learn how to deal with this mess in the chapters on [tidy data](tidyr.html#tidyr) and [data wrangling](dplyr.html#dplyr), but we can fix a few things now by setting the `col_types` argument in `read_csv()` to specify the column types for the columns that were guessed wrong, and to skip the “junk” column. The argument `col_types` takes a list where the name of each item in the list is a column name and the value is from the table below. You can use the function, like `col_double()`, or the abbreviation, like `"l"`. Omitted column names are guessed.
| function | abbreviation | description |
| --- | --- | --- |
| col\_logical() | l | logical values |
| col\_integer() | i | integer values |
| col\_double() | d | numeric values |
| col\_character() | c | strings |
| col\_factor(levels, ordered) | f | a fixed set of values |
| col\_date(format \= "") | D | with the locale’s date\_format |
| col\_time(format \= "") | t | with the locale’s time\_format |
| col\_datetime(format \= "") | T | ISO8601 date time |
| col\_number() | n | numbers containing the grouping\_mark |
| col\_skip() | \_, \- | don’t import this column |
| col\_guess() | ? | parse using the “best” type based on the input |
```
# omitted values are guessed
# ?col_date for format options
ct <- list(
junk = "-", # skip this column
order = "i",
good = "l",
date = col_date(format = "%Y-%m-%d")
)
tidier <- read_csv("data/mess.csv",
skip = 2,
col_types = ct)
```
```
## Warning: 1 parsing failure.
## row col expected actual file
## 2 order an integer missing 'data/mess.csv'
```
You will get a message about “1 parsing failure” when you run this. Warnings look scary at first, but always start by reading the message. The table tells you what row (`2`) and column (`order`) the error was found in, what kind of data was expected (`integer`), and what the actual value was (`missing`). If you specifically tell `read_csv()` to import a column as an integer, any characters in the column will produce a warning like this and then be recorded as `NA`. You can manually set what the missing values are recorded as with the `na` argument.
```
tidiest <- read_csv("data/mess.csv",
skip = 2,
na = "missing",
col_types = ct)
```
Now `order` is an integer column where “missing” has been replaced by `NA`; `good` is a logical column, where `0` and `F` are converted to `FALSE` and `1` and `T` are converted to `TRUE`; and `date` is a date type (with leading zeros added to the day). We’ll learn in later chapters how to fix the other problems.
```
tidiest
```
| order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- |
| 1 | \-1 | a | TRUE | 1 \- 2 | 2020\-01\-01 |
| NA | 0\.72 | b | TRUE | 2 \- 3 | 2020\-01\-02 |
| 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-03 |
| 4 | 2\.03 | d | TRUE | 4 \- 5 | 2020\-01\-04 |
| 5 | NA | e | TRUE | 5 \- 6 | 2020\-01\-05 |
| 6 | 0\.99 | f | FALSE | 6 \- 7 | 2020\-01\-06 |
| 7 | 0\.03 | g | TRUE | 7 \- 8 | 2020\-01\-07 |
| 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-08 |
| 9 | 0\.57 | i | TRUE | 9 \- 10 | 2020\-01\-09 |
| 10 | 0\.9 | j | TRUE | 10 \- 11 | 2020\-01\-10 |
| 11 | \-1\.55 | k | FALSE | 11 \- 12 | 2020\-01\-11 |
| 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| 13 | 0\.15 | m | TRUE | 13 \- 14 | 2020\-01\-13 |
| 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| 15 | \-0\.99 | o | TRUE | 15 \- 16 | 2020\-01\-15 |
| 16 | 1\.97 | p | TRUE | 16 \- 17 | 2020\-01\-16 |
| 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| 18 | \-0\.9 | r | FALSE | 18 \- 19 | 2020\-01\-18 |
| 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| 20 | \-0\.83 | t | FALSE | 20 \- 21 | 2020\-01\-20 |
| 21 | 1\.99 | u | TRUE | 21 \- 22 | 2020\-01\-21 |
| 22 | 0\.04 | v | FALSE | 22 \- 23 | 2020\-01\-22 |
| 23 | \-0\.4 | w | FALSE | 23 \- 24 | 2020\-01\-23 |
| 24 | \-0\.47 | x | FALSE | 24 \- 25 | 2020\-01\-24 |
| 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| 26 | 0\.68 | z | FALSE | 26 \- 27 | 2020\-01\-26 |
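One more note on the `na` argument: it accepts a character vector, so if a file marks missing data in several different ways you can list them all. A hedged sketch reusing the column specification from above:
```
tidy_all <- read_csv("data/mess.csv",
                     skip = 2,
                     na = c("", "NA", "missing"),
                     col_types = ct)
```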
2\.8 Glossary
-------------
| term | definition |
| --- | --- |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [character](https://psyteachr.github.io/glossary/c#character) | A data type representing strings of text. |
| [csv](https://psyteachr.github.io/glossary/c#csv) | Comma\-separated variable: a file type for representing data where each variable is separated from the next by a comma. |
| [data type](https://psyteachr.github.io/glossary/d#data.type) | The kind of data represented by an object. |
| [deviation score](https://psyteachr.github.io/glossary/d#deviation.score) | A score minus the mean |
| [double](https://psyteachr.github.io/glossary/d#double) | A data type representing a real decimal number |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [extension](https://psyteachr.github.io/glossary/e#extension) | The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd). |
| [extract operator](https://psyteachr.github.io/glossary/e#extract.operator) | A symbol used to get values from a container object, such as \[, \[\[, or $ |
| [factor](https://psyteachr.github.io/glossary/f#factor) | A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [integer](https://psyteachr.github.io/glossary/i#integer) | A data type representing whole numbers. |
| [list](https://psyteachr.github.io/glossary/l#list) | A container data type that allows items with different data types to be grouped together. |
| [logical](https://psyteachr.github.io/glossary/l#logical) | A data type representing TRUE or FALSE values. |
| [numeric](https://psyteachr.github.io/glossary/n#numeric) | A data type representing a real decimal number or integer. |
| [operator](https://psyteachr.github.io/glossary/o#operator) | A symbol that performs a mathematical operation, such as \+, \-, \*, / |
| [tabular data](https://psyteachr.github.io/glossary/t#tabular.data) | Data in a rectangular table format, where each row has an entry for each column. |
| [tidy data](https://psyteachr.github.io/glossary/t#tidy.data) | A format for data that maps the meaning onto the structure. |
| [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse) | A set of R packages that help you create and work with tidy data |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [vectorized](https://psyteachr.github.io/glossary/v#vectorized) | An operator or function that acts on each element in a vector |
2\.9 Exercises
--------------
Download the [exercises](exercises/02_data_exercise.Rmd). See the [answers](exercises/02_data_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(2)
# run this to access the answers
dataskills::exercise(2, answers = TRUE)
```
2\.6 Basic container types
--------------------------
Individual data values can be grouped together into containers. The main types of containers we’ll work with are vectors, lists, and data tables.
### 2\.6\.1 Vectors
A [vector](https://psyteachr.github.io/glossary/v#vector "A type of data structure that is basically a list of things like T/F values, numbers, or strings.") in R is like a vector in mathematics: a set of ordered elements. All of the elements in a vector must be of the same **data type** (numeric, character, logical). You can create a vector by enclosing the elements in the function `c()`.
```
## put information into a vector using c(...)
c(1, 2, 3, 4)
c("this", "is", "cool")
1:6 # shortcut to make a vector of all integers x:y
```
```
## [1] 1 2 3 4
## [1] "this" "is" "cool"
## [1] 1 2 3 4 5 6
```
What happens when you mix types? What class is the variable `mixed`?
```
mixed <- c(2, "good", 2L, "b", TRUE)
```
You can’t mix data types in a vector; all elements of the vector must be the same data type. If you mix them, R will “coerce” them so that they are all the same. If you mix doubles and integers, the integers will be changed to doubles. If you mix characters and numeric types, the numbers will be coerced to characters, so `10` would turn into “10\.”
#### 2\.6\.1\.1 Selecting values from a vector
If we wanted to pick specific values out of a vector by position, we can use square brackets (an [extract operator](https://psyteachr.github.io/glossary/e#extract-operator "A symbol used to get values from a container object, such as [, [[, or $"), or `[]`) after the vector.
```
values <- c(10, 20, 30, 40, 50)
values[2] # selects the second value
```
```
## [1] 20
```
You can select more than one value from the vector by putting a vector of numbers inside the square brackets. For example, you can select the 18th, 19th, 20th, 21st, 4th, 9th and 15th letter from the built\-in vector `LETTERS` (which gives all the uppercase letters in the Latin alphabet).
```
word <- c(18, 19, 20, 21, 4, 9, 15)
LETTERS[word]
```
```
## [1] "R" "S" "T" "U" "D" "I" "O"
```
Can you decode the secret message?
```
secret <- c(14, 5, 22, 5, 18, 7, 15, 14, 14, 1, 7, 9, 22, 5, 25, 15, 21, 21, 16)
```
You can also create ‘named’ vectors, where each element has a name. For example:
```
vec <- c(first = 77.9, second = -13.2, third = 100.1)
vec
```
```
## first second third
## 77.9 -13.2 100.1
```
We can then access elements by name using a character vector within the square brackets. We can put them in any order we want, and we can repeat elements:
```
vec[c("third", "second", "second")]
```
```
## third second second
## 100.1 -13.2 -13.2
```
We can get the vector of names using the `names()` function, and we can set or change them using something like `names(vec2) <- c(“n1,” “n2,” “n3”)`.
Another way to access elements is by using a logical vector within the square brackets. This will pull out the elements of the vector for which the corresponding element of the logical vector is `TRUE`. If the logical vector doesn’t have the same length as the original, it will repeat. You can find out how long a vector is using the `length()` function.
```
length(LETTERS)
LETTERS[c(TRUE, FALSE)]
```
```
## [1] 26
## [1] "A" "C" "E" "G" "I" "K" "M" "O" "Q" "S" "U" "W" "Y"
```
#### 2\.6\.1\.2 Repeating Sequences
Here are some useful tricks to save typing when creating vectors.
In the command `x:y` the `:` operator would give you the sequence of number starting at `x`, and going to `y` in increments of 1\.
```
1:10
15.3:20.5
0:-10
```
```
## [1] 1 2 3 4 5 6 7 8 9 10
## [1] 15.3 16.3 17.3 18.3 19.3 20.3
## [1] 0 -1 -2 -3 -4 -5 -6 -7 -8 -9 -10
```
What if you want to create a sequence but with something other than integer steps? You can use the `seq()` function. Look at the examples below and work out what the arguments do.
```
seq(from = -1, to = 1, by = 0.2)
seq(0, 100, length.out = 11)
seq(0, 10, along.with = LETTERS)
```
```
## [1] -1.0 -0.8 -0.6 -0.4 -0.2 0.0 0.2 0.4 0.6 0.8 1.0
## [1] 0 10 20 30 40 50 60 70 80 90 100
## [1] 0.0 0.4 0.8 1.2 1.6 2.0 2.4 2.8 3.2 3.6 4.0 4.4 4.8 5.2 5.6
## [16] 6.0 6.4 6.8 7.2 7.6 8.0 8.4 8.8 9.2 9.6 10.0
```
What if you want to repeat a vector many times? You could either type it out (painful) or use the `rep()` function, which can repeat vectors in different ways.
```
rep(0, 10) # ten zeroes
rep(c(1L, 3L), times = 7) # alternating 1 and 3, 7 times
rep(c("A", "B", "C"), each = 2) # A to C, 2 times each
```
```
## [1] 0 0 0 0 0 0 0 0 0 0
## [1] 1 3 1 3 1 3 1 3 1 3 1 3 1 3
## [1] "A" "A" "B" "B" "C" "C"
```
The `rep()` function is useful to create a vector of logical values (`TRUE`/`FALSE` or `1`/`0`) to select values from another vector.
```
# Get subject IDs in the pattern Y Y N N ...
subject_ids <- 1:40
yynn <- rep(c(TRUE, FALSE), each = 2,
length.out = length(subject_ids))
subject_ids[yynn]
```
```
## [1] 1 2 5 6 9 10 13 14 17 18 21 22 25 26 29 30 33 34 37 38
```
#### 2\.6\.1\.3 Vectorized Operations
R performs calculations on vectors in a special way. Let’s look at an example using \\(z\\)\-scores. A \\(z\\)\-score is a [deviation score](https://psyteachr.github.io/glossary/d#deviation-score "A score minus the mean")(a score minus a mean) divided by a standard deviation. Let’s say we have a set of four IQ scores.
```
## example IQ scores: mu = 100, sigma = 15
iq <- c(86, 101, 127, 99)
```
If we want to subtract the mean from these four scores, we just use the following code:
```
iq - 100
```
```
## [1] -14 1 27 -1
```
This subtracts 100 from each element of the vector. R automatically assumes that this is what you wanted to do; it is called a [vectorized](https://psyteachr.github.io/glossary/v#vectorized "An operator or function that acts on each element in a vector") operation and it makes it possible to express operations more efficiently.
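As a small aside (not part of the original example), vectorised operations also work elementwise between two vectors of the same length:
```
## matching elements are added pairwise
c(1, 2, 3) + c(10, 20, 30) # 11 22 33
```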
To calculate \\(z\\)\-scores we use the formula:
\\(z \= \\frac{X \- \\mu}{\\sigma}\\)
where \\(X\\) is the vector of scores, \\(\\mu\\) is the mean, and \\(\\sigma\\) is the standard deviation. We can express this formula in R as follows:
```
## z-scores
(iq - 100) / 15
```
```
## [1] -0.93333333 0.06666667 1.80000000 -0.06666667
```
You can see that it computed all four \\(z\\)\-scores with a single line of code. In later chapters, we’ll use vectorised operations to process our data, such as reverse\-scoring some questionnaire items.
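As a further sketch (an extension of the example above, not a different method), if you don’t know the population mean and standard deviation you can estimate them from the data, since `mean()` and `sd()` also take whole vectors:
```
## sample z-scores, using the sample mean and SD
(iq - mean(iq)) / sd(iq)
```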
### 2\.6\.2 Lists
Recall that vectors can contain data of only one type. What if you want to store a collection of data of different data types? For that purpose you would use a [list](https://psyteachr.github.io/glossary/l#list "A container data type that allows items with different data types to be grouped together."). Define a list using the `list()` function.
```
data_types <- list(
double = 10.0,
integer = 10L,
character = "10",
logical = TRUE
)
str(data_types) # str() prints lists in a condensed format
```
```
## List of 4
## $ double : num 10
## $ integer : int 10
## $ character: chr "10"
## $ logical : logi TRUE
```
You can refer to elements of a list using square brackets like a vector, but you can also use the dollar sign notation (`$`) if the list items have names.
```
data_types$logical
```
```
## [1] TRUE
```
Explore the 5 ways shown below to extract a value from a list. What data type is each object? What is the difference between the single and double brackets? Which one is the same as the dollar sign?
```
bracket1 <- data_types[1]
bracket2 <- data_types[[1]]
name1 <- data_types["double"]
name2 <- data_types[["double"]]
dollar <- data_types$double
```
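One way to check your answers (a hint rather than a full solution) is to inspect each object with `str()` or `class()`:
```
## compare what each extraction method returns
str(bracket1)
str(bracket2)
class(name1)
class(dollar)
```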
### 2\.6\.3 Tables
The built\-in, imported, and created data above are [tabular data](https://psyteachr.github.io/glossary/t#tabular-data "Data in a rectangular table format, where each row has an entry for each column."), data arranged in the form of a table.
Tabular data structures allow for a collection of data of different types (characters, integers, logical, etc.) but subject to the constraint that each “column” of the table (element of the list) must have the same number of elements. The base R version of a table is called a `data.frame`, while the ‘tidyverse’ version is called a `tibble`. Tibbles are far easier to work with, so we’ll be using those. To learn more about differences between these two data structures, see `vignette("tibble")`.
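One concrete difference, as an illustrative aside (the object names `df` and `tb` are just examples): subsetting a single column with `[` keeps a tibble as a one\-column table, whereas a data.frame drops it down to a bare vector.
```
df <- data.frame(x = 1:3, y = c("a", "b", "c"))
tb <- tibble(x = 1:3, y = c("a", "b", "c"))
df[, "x"] # a plain vector: 1 2 3
tb[, "x"] # still a one-column tibble
```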
Tabular data becomes especially important when we talk about [tidy data](https://psyteachr.github.io/glossary/t#tidy-data "A format for data that maps the meaning onto the structure.") in [chapter 4](tidyr.html#tidyr), which consists of a set of simple principles for structuring data.
#### 2\.6\.3\.1 Creating a table
We learned how to create a table by importing an Excel or CSV file, and creating a table from scratch using the `tibble()` function. You can also use the `tibble::tribble()` function to create a table by row, rather than by column. You start by listing the column names, each preceded by a tilde (`~`), then you list the values for each column, row by row, separated by commas (each row except the last needs a comma at the end). This method can be easier for some data, but doesn’t let you use shortcuts, like setting all of the values in a column to the same value or a [repeating sequence](data.html#rep_seq).
```
# by column using tibble
avatar_by_col <- tibble(
name = c("Katara", "Toph", "Sokka", "Azula"),
bends = c("water", "earth", NA, "fire"),
friendly = rep(c(TRUE, FALSE), c(3, 1))
)
# by row using tribble
avatar_by_row <- tribble(
~name, ~bends, ~friendly,
"Katara", "water", TRUE,
"Toph", "earth", TRUE,
"Sokka", NA, TRUE,
"Azula", "fire", FALSE
)
```
#### 2\.6\.3\.2 Table info
We can get information about the table using the functions `ncol()` (number of columns), `nrow()` (number of rows), `dim()` (the number of rows and number of columns), and `names()` (the column names).
```
nrow(avatar) # how many rows?
ncol(avatar) # how many columns?
dim(avatar) # what are the table dimensions?
names(avatar) # what are the column names?
```
```
## [1] 3
## [1] 3
## [1] 3 3
## [1] "name" "bends" "friendly"
```
#### 2\.6\.3\.3 Accessing rows and columns
There are various ways of accessing specific columns or rows from a table. The ones below are from [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") and are useful to know about, but you’ll be learning easier (and more readable) ways in the [tidyr](tidyr.html#tidyr) and [dplyr](dplyr.html#dplyr) lessons. Examples of these base R accessing functions are provided here for reference, since you might see them in other people’s scripts.
```
katara <- avatar[1, ] # first row
type <- avatar[, 2] # second column (bends)
benders <- avatar[c(1, 2), ] # selected rows (by number)
bends_name <- avatar[, c("bends", "name")] # selected columns (by name)
friendly <- avatar$friendly # by column name
```
2\.7 Troubleshooting
--------------------
What if you import some data and it guesses the wrong column type? The most common reason is that a numeric column has some non\-numbers in it somewhere. Maybe someone wrote a note in an otherwise numeric column. Columns have to be all one data type, so if there are any characters, the whole column is converted to character strings, and numbers like `1.2` get represented as `"1.2"`, which will cause very weird errors like `"100" < "9"` being `TRUE`. You can catch this by looking at the output from `read_csv()` or using `glimpse()` to check your data.
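To see why character columns cause such strange comparisons, here is a small sketch (not from the original chapter): character comparison is alphabetical, not numeric.
```
"100" < "9" # TRUE, because "1" sorts before "9"
as.numeric("100") < as.numeric("9") # FALSE, once converted back to numbers
```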
The data directory you created with `dataskills::getdata()` contains a file called “mess.csv.” Let’s try loading this dataset.
```
mess <- read_csv("data/mess.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## `This is my messy dataset` = col_character()
## )
```
```
## Warning: 27 parsing failures.
## row col expected actual file
## 1 -- 1 columns 7 columns 'data/mess.csv'
## 2 -- 1 columns 7 columns 'data/mess.csv'
## 3 -- 1 columns 7 columns 'data/mess.csv'
## 4 -- 1 columns 7 columns 'data/mess.csv'
## 5 -- 1 columns 7 columns 'data/mess.csv'
## ... ... ......... ......... ...............
## See problems(...) for more details.
```
You’ll get a warning with many parsing errors and `mess` is just a single column of the word “junk.” View the file `data/mess.csv` by clicking on it in the File pane, and choosing “View File.” Here are the first few lines. What went wrong?
```
This is my messy dataset
junk,order,score,letter,good,min_max,date
junk,1,-1,a,1,1 - 2,2020-01-1
junk,missing,0.72,b,1,2 - 3,2020-01-2
junk,3,-0.62,c,FALSE,3 - 4,2020-01-3
junk,4,2.03,d,T,4 - 5,2020-01-4
```
First, the file starts with a note: “This is my messy dataset.” We want to skip the first two lines. You can do this with the argument `skip` in `read_csv()`.
```
mess <- read_csv("data/mess.csv", skip = 2)
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## junk = col_character(),
## order = col_character(),
## score = col_double(),
## letter = col_character(),
## good = col_character(),
## min_max = col_character(),
## date = col_character()
## )
```
```
mess
```
| junk | order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- | --- |
| junk | 1 | \-1\.00 | a | 1 | 1 \- 2 | 2020\-01\-1 |
| junk | missing | 0\.72 | b | 1 | 2 \- 3 | 2020\-01\-2 |
| junk | 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-3 |
| junk | 4 | 2\.03 | d | T | 4 \- 5 | 2020\-01\-4 |
| junk | 5 | NA | e | 1 | 5 \- 6 | 2020\-01\-5 |
| junk | 6 | 0\.99 | f | 0 | 6 \- 7 | 2020\-01\-6 |
| junk | 7 | 0\.03 | g | T | 7 \- 8 | 2020\-01\-7 |
| junk | 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-8 |
| junk | 9 | 0\.57 | i | 1 | 9 \- 10 | 2020\-01\-9 |
| junk | 10 | 0\.90 | j | T | 10 \- 11 | 2020\-01\-10 |
| junk | 11 | \-1\.55 | k | F | 11 \- 12 | 2020\-01\-11 |
| junk | 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| junk | 13 | 0\.15 | m | T | 13 \- 14 | 2020\-01\-13 |
| junk | 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| junk | 15 | \-0\.99 | o | 1 | 15 \- 16 | 2020\-01\-15 |
| junk | 16 | 1\.97 | p | T | 16 \- 17 | 2020\-01\-16 |
| junk | 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| junk | 18 | \-0\.90 | r | F | 18 \- 19 | 2020\-01\-18 |
| junk | 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| junk | 20 | \-0\.83 | t | 0 | 20 \- 21 | 2020\-01\-20 |
| junk | 21 | 1\.99 | u | T | 21 \- 22 | 2020\-01\-21 |
| junk | 22 | 0\.04 | v | F | 22 \- 23 | 2020\-01\-22 |
| junk | 23 | \-0\.40 | w | F | 23 \- 24 | 2020\-01\-23 |
| junk | 24 | \-0\.47 | x | 0 | 24 \- 25 | 2020\-01\-24 |
| junk | 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| junk | 26 | 0\.68 | z | 0 | 26 \- 27 | 2020\-01\-26 |
OK, that’s a little better, but this table is still a serious mess in several ways:
* `junk` is a column that we don’t need
* `order` should be an integer column
* `good` should be a logical column
* `good` uses all kinds of different ways to record TRUE and FALSE values
* `min_max` contains two pieces of numeric information, but is a character column
* `date` should be a date column
We’ll learn how to deal with this mess in the chapters on [tidy data](tidyr.html#tidyr) and [data wrangling](dplyr.html#dplyr), but we can fix a few things now by setting the `col_types` argument in `read_csv()` to specify the types for the columns that were guessed wrong and to skip the “junk” column. The argument `col_types` takes a list where the name of each item in the list is a column name and the value is from the table below. You can use either the function, like `col_double()`, or its abbreviation, like `"d"`. Omitted column names are guessed.
| function | abbreviation | description |
| --- | --- | --- |
| col\_logical() | l | logical values |
| col\_integer() | i | integer values |
| col\_double() | d | numeric values |
| col\_character() | c | strings |
| col\_factor(levels, ordered) | f | a fixed set of values |
| col\_date(format \= "") | D | with the locale’s date\_format |
| col\_time(format \= "") | t | with the locale’s time\_format |
| col\_datetime(format \= "") | T | ISO8601 date time |
| col\_number() | n | numbers containing the grouping\_mark |
| col\_skip() | \_, \- | don’t import this column |
| col\_guess() | ? | parse using the “best” type based on the input |
```
# omitted values are guessed
# ?col_date for format options
ct <- list(
junk = "-", # skip this column
order = "i",
good = "l",
date = col_date(format = "%Y-%m-%d")
)
tidier <- read_csv("data/mess.csv",
skip = 2,
col_types = ct)
```
```
## Warning: 1 parsing failure.
## row col expected actual file
## 2 order an integer missing 'data/mess.csv'
```
You will get a message about “1 parsing failure” when you run this. Warnings look scary at first, but always start by reading the message. The table tells you what row (`2`) and column (`order`) the error was found in, what kind of data was expected (`integer`), and what the actual value was (`missing`). If you specifically tell `read_csv()` to import a column as an integer, any characters in the column will produce a warning like this and then be recorded as `NA`. You can manually set what the missing values are recorded as with the `na` argument.
```
tidiest <- read_csv("data/mess.csv",
skip = 2,
na = "missing",
col_types = ct)
```
Now `order` is an integer column where “missing” has been converted to `NA`, `good` is a logical column where `0` and `F` are converted to `FALSE` and `1` and `T` to `TRUE`, and `date` is a date column (with leading zeros added to the day). We’ll learn in later chapters how to fix the other problems.
```
tidiest
```
| order | score | letter | good | min\_max | date |
| --- | --- | --- | --- | --- | --- |
| 1 | \-1 | a | TRUE | 1 \- 2 | 2020\-01\-01 |
| NA | 0\.72 | b | TRUE | 2 \- 3 | 2020\-01\-02 |
| 3 | \-0\.62 | c | FALSE | 3 \- 4 | 2020\-01\-03 |
| 4 | 2\.03 | d | TRUE | 4 \- 5 | 2020\-01\-04 |
| 5 | NA | e | TRUE | 5 \- 6 | 2020\-01\-05 |
| 6 | 0\.99 | f | FALSE | 6 \- 7 | 2020\-01\-06 |
| 7 | 0\.03 | g | TRUE | 7 \- 8 | 2020\-01\-07 |
| 8 | 0\.67 | h | TRUE | 8 \- 9 | 2020\-01\-08 |
| 9 | 0\.57 | i | TRUE | 9 \- 10 | 2020\-01\-09 |
| 10 | 0\.9 | j | TRUE | 10 \- 11 | 2020\-01\-10 |
| 11 | \-1\.55 | k | FALSE | 11 \- 12 | 2020\-01\-11 |
| 12 | NA | l | FALSE | 12 \- 13 | 2020\-01\-12 |
| 13 | 0\.15 | m | TRUE | 13 \- 14 | 2020\-01\-13 |
| 14 | \-0\.66 | n | TRUE | 14 \- 15 | 2020\-01\-14 |
| 15 | \-0\.99 | o | TRUE | 15 \- 16 | 2020\-01\-15 |
| 16 | 1\.97 | p | TRUE | 16 \- 17 | 2020\-01\-16 |
| 17 | \-0\.44 | q | TRUE | 17 \- 18 | 2020\-01\-17 |
| 18 | \-0\.9 | r | FALSE | 18 \- 19 | 2020\-01\-18 |
| 19 | \-0\.15 | s | FALSE | 19 \- 20 | 2020\-01\-19 |
| 20 | \-0\.83 | t | FALSE | 20 \- 21 | 2020\-01\-20 |
| 21 | 1\.99 | u | TRUE | 21 \- 22 | 2020\-01\-21 |
| 22 | 0\.04 | v | FALSE | 22 \- 23 | 2020\-01\-22 |
| 23 | \-0\.4 | w | FALSE | 23 \- 24 | 2020\-01\-23 |
| 24 | \-0\.47 | x | FALSE | 24 \- 25 | 2020\-01\-24 |
| 25 | \-0\.41 | y | TRUE | 25 \- 26 | 2020\-01\-25 |
| 26 | 0\.68 | z | FALSE | 26 \- 27 | 2020\-01\-26 |
2\.8 Glossary
-------------
| term | definition |
| --- | --- |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [character](https://psyteachr.github.io/glossary/c#character) | A data type representing strings of text. |
| [csv](https://psyteachr.github.io/glossary/c#csv) | Comma\-separated variable: a file type for representing data where each variable is separated from the next by a comma. |
| [data type](https://psyteachr.github.io/glossary/d#data.type) | The kind of data represented by an object. |
| [deviation score](https://psyteachr.github.io/glossary/d#deviation.score) | A score minus the mean |
| [double](https://psyteachr.github.io/glossary/d#double) | A data type representing a real decimal number |
| [escape](https://psyteachr.github.io/glossary/e#escape) | Include special characters like " inside of a string by prefacing them with a backslash. |
| [extension](https://psyteachr.github.io/glossary/e#extension) | The end part of a file name that tells you what type of file it is (e.g., .R or .Rmd). |
| [extract operator](https://psyteachr.github.io/glossary/e#extract.operator) | A symbol used to get values from a container object, such as \[, \[\[, or $ |
| [factor](https://psyteachr.github.io/glossary/f#factor) | A data type where a specific set of values are stored with labels; An explanatory variable manipulated by the experimenter |
| [global environment](https://psyteachr.github.io/glossary/g#global.environment) | The interactive workspace where your script runs |
| [integer](https://psyteachr.github.io/glossary/i#integer) | A data type representing whole numbers. |
| [list](https://psyteachr.github.io/glossary/l#list) | A container data type that allows items with different data types to be grouped together. |
| [logical](https://psyteachr.github.io/glossary/l#logical) | A data type representing TRUE or FALSE values. |
| [numeric](https://psyteachr.github.io/glossary/n#numeric) | A data type representing a real decimal number or integer. |
| [operator](https://psyteachr.github.io/glossary/o#operator) | A symbol that performs a mathematical operation, such as \+, \-, \*, / |
| [tabular data](https://psyteachr.github.io/glossary/t#tabular.data) | Data in a rectangular table format, where each row has an entry for each column. |
| [tidy data](https://psyteachr.github.io/glossary/t#tidy.data) | A format for data that maps the meaning onto the structure. |
| [tidyverse](https://psyteachr.github.io/glossary/t#tidyverse) | A set of R packages that help you create and work with tidy data |
| [vector](https://psyteachr.github.io/glossary/v#vector) | A type of data structure that is basically a list of things like T/F values, numbers, or strings. |
| [vectorized](https://psyteachr.github.io/glossary/v#vectorized) | An operator or function that acts on each element in a vector |
2\.9 Exercises
--------------
Download the [exercises](exercises/02_data_exercise.Rmd). See the [answers](exercises/02_data_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(2)
# run this to access the answers
dataskills::exercise(2, answers = TRUE)
```
Chapter 3 Data Visualisation
============================
3\.1 Learning Objectives
------------------------
### 3\.1\.1 Basic
1. Understand what types of graphs are best for [different types of data](ggplot.html#vartypes) [(video)](https://youtu.be/tOFQFPRgZ3M)
* 1 discrete
* 1 continuous
* 2 discrete
* 2 continuous
* 1 discrete, 1 continuous
* 3 continuous
2. Create common types of graphs with ggplot2 [(video)](https://youtu.be/kKlQupjD__g)
* [`geom_bar()`](ggplot.html#geom_bar)
* [`geom_density()`](ggplot.html#geom_density)
* [`geom_freqpoly()`](ggplot.html#geom_freqpoly)
* [`geom_histogram()`](ggplot.html#geom_histogram)
* [`geom_col()`](ggplot.html#geom_col)
* [`geom_boxplot()`](ggplot.html#geom_boxplot)
* [`geom_violin()`](ggplot.html#geom_violin)
* [Vertical Intervals](ggplot.html#vertical_intervals)
+ `geom_crossbar()`
+ `geom_errorbar()`
+ `geom_linerange()`
+ `geom_pointrange()`
* [`geom_point()`](ggplot.html#geom_point)
* [`geom_smooth()`](ggplot.html#geom_smooth)
3. Set custom [labels](ggplot.html#custom-labels), [colours](ggplot.html#custom-colours), and [themes](ggplot.html#themes) [(video)](https://youtu.be/6pHuCbOh86s)
4. [Combine plots](combo_plots) on the same plot, as facets, or as a grid using cowplot [(video)](https://youtu.be/AnqlfuU-VZk)
5. [Save plots](ggplot.html#ggsave) as an image file [(video)](https://youtu.be/f1Y53mjEli0)
### 3\.1\.2 Intermediate
6. Add lines to graphs
7. Deal with [overlapping data](ggplot.html#overlap)
8. Create less common types of graphs
* [`geom_tile()`](ggplot.html#geom_tile)
* [`geom_density2d()`](ggplot.html#geom_density2d)
* [`geom_bin2d()`](ggplot.html#geom_bin2d)
* [`geom_hex()`](ggplot.html#geom_hex)
* [`geom_count()`](ggplot.html#geom_count)
9. Adjust axes (e.g., flip coordinates, set axis limits)
10. Create interactive graphs with [`plotly`](ggplot.html#plotly)
3\.2 Resources
--------------
* [Chapter 3: Data Visualisation](http://r4ds.had.co.nz/data-visualisation.html) of *R for Data Science*
* [ggplot2 cheat sheet](https://github.com/rstudio/cheatsheets/raw/master/data-visualization-2.1.pdf)
* [Chapter 28: Graphics for communication](http://r4ds.had.co.nz/graphics-for-communication.html) of *R for Data Science*
* [Look at Data](http://socviz.co/look-at-data.html) from [Data Vizualization for Social Science](http://socviz.co/)
* [Hack Your Data Beautiful](https://psyteachr.github.io/hack-your-data/) workshop by University of Glasgow postgraduate students
* [Graphs](http://www.cookbook-r.com/Graphs) in *Cookbook for R*
* [ggplot2 documentation](https://ggplot2.tidyverse.org/reference/)
* [The R Graph Gallery](http://www.r-graph-gallery.com/) (this is really useful)
* [Top 50 ggplot2 Visualizations](http://r-statistics.co/Top50-Ggplot2-Visualizations-MasterList-R-Code.html)
* [R Graphics Cookbook](http://www.cookbook-r.com/Graphs/) by Winston Chang
* [ggplot extensions](https://www.ggplot2-exts.org/)
* [plotly](https://plot.ly/ggplot2/) for creating interactive graphs
3\.3 Setup
----------
```
# libraries needed for these graphs
library(tidyverse)
library(dataskills)
library(plotly)
library(cowplot)
set.seed(30250) # makes sure random numbers are reproducible
```
3\.4 Common Variable Combinations
---------------------------------
[Continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") variables are properties you can measure, like height. [Discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") variables are things you can count, like the number of pets you have. Categorical variables can be [nominal](https://psyteachr.github.io/glossary/n#nominal "Categorical variables that don’t have an inherent order, such as types of animal."), where the categories don’t really have an order, like cats, dogs and ferrets (even though ferrets are obviously best). They can also be [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs"), where there is a clear order, but the distance between the categories isn’t something you could exactly equate, like points on a [Likert](https://psyteachr.github.io/glossary/l#likert "A rating scale with a small number of discrete points in order") rating scale.
Different types of visualisations are good for different types of variables.
Load the `pets` dataset from the `dataskills` package and explore it with `glimpse(pets)` or `View(pets)`. This is a simulated dataset with one random factor (`id`), two categorical factors (`pet`, `country`) and three continuous variables (`score`, `age`, `weight`).
```
data("pets")
# if you don't have the dataskills package, use:
# pets <- read_csv("https://psyteachr.github.io/msc-data-skills/data/pets.csv", col_types = "cffiid")
glimpse(pets)
```
```
## Rows: 800
## Columns: 6
## $ id <chr> "S001", "S002", "S003", "S004", "S005", "S006", "S007", "S008"…
## $ pet <fct> dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, do…
## $ country <fct> UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK…
## $ score <int> 90, 107, 94, 120, 111, 110, 100, 107, 106, 109, 85, 110, 102, …
## $ age <int> 6, 8, 2, 10, 4, 8, 9, 8, 6, 11, 5, 9, 1, 10, 7, 8, 1, 8, 5, 13…
## $ weight <dbl> 19.78932, 20.01422, 19.14863, 19.56953, 21.39259, 21.31880, 19…
```
Before you read ahead, come up with an example of each type of variable combination and sketch the types of graphs that would best display these data.
* 1 categorical
* 1 continuous
* 2 categorical
* 2 continuous
* 1 categorical, 1 continuous
* 3 continuous
3\.5 Basic Plots
----------------
R has some basic plotting functions, but they’re difficult to use and aesthetically not very nice. They can be useful to have a quick look at data while you’re working on a script, though. The function `plot()` usually defaults to a sensible type of plot, depending on whether the arguments `x` and `y` are categorical, continuous, or missing.
```
plot(x = pets$pet)
```
Figure 3\.1: plot() with categorical x
```
plot(x = pets$pet, y = pets$score)
```
Figure 3\.2: plot() with categorical x and continuous y
```
plot(x = pets$age, y = pets$weight)
```
Figure 3\.3: plot() with continuous x and y
The function `hist()` creates a quick histogram so you can see the distribution of your data. You can adjust how many columns are plotted with the argument `breaks`.
```
hist(pets$score, breaks = 20)
```
Figure 3\.4: hist()
3\.6 GGplots
------------
While the functions above are nice for quick visualisations, it’s hard to make pretty, publication\-ready plots. The package `ggplot2` (loaded with `tidyverse`) is one of the most common packages for creating beautiful visualisations.
`ggplot2` creates plots using a “grammar of graphics” where you add [geoms](https://psyteachr.github.io/glossary/g#geom "The geometric style in which data are displayed, such as boxplot, density, or histogram.") in layers. It can be complex to understand, but it’s very powerful once you have a mental model of how it works.
Let’s start with a totally empty plot layer created by the `ggplot()` function with no arguments.
```
ggplot()
```
Figure 3\.5: A plot base created by ggplot()
The first argument to `ggplot()` is the `data` table you want to plot. Let’s use the `pets` data we loaded above. The second argument is the `mapping` for which columns in your data table correspond to which properties of the plot, such as the `x`\-axis, the `y`\-axis, line `colour` or `linetype`, point `shape`, or object `fill`. These mappings are specified by the `aes()` function. Just adding this to the `ggplot` function creates the labels and ranges for the `x` and `y` axes. They usually have sensible default values, given your data, but we’ll learn how to change them later.
```
mapping <- aes(x = pet,
y = score,
colour = country,
fill = country)
ggplot(data = pets, mapping = mapping)
```
Figure 3\.6: Empty ggplot with x and y labels
People usually omit the argument names and just put the `aes()` function directly as the second argument to `ggplot`. They also usually omit `x` and `y` as argument names to `aes()` (but you have to name the other properties). Next we can add “geoms,” or plot styles. You literally add them with the `+` symbol. You can also add other plot attributes, such as labels, or change the theme and base font size.
```
ggplot(pets, aes(pet, score, colour = country, fill = country)) +
geom_violin(alpha = 0.5) +
labs(x = "Pet type",
y = "Score on an Important Test",
colour = "Country of Origin",
fill = "Country of Origin",
title = "My first plot!") +
theme_bw(base_size = 15)
```
Figure 3\.7: Violin plot with country represented by colour.
3\.7 Common Plot Types
----------------------
There are many geoms, and they can take different arguments to customise their appearance. We’ll learn about some of the most common below.
### 3\.7\.1 Bar plot
Bar plots are good for categorical data where you want to represent the count.
```
ggplot(pets, aes(pet)) +
geom_bar()
```
Figure 3\.8: Bar plot
### 3\.7\.2 Density plot
Density plots are good for one continuous variable, but only if you have a fairly large number of observations.
```
ggplot(pets, aes(score)) +
geom_density()
```
Figure 3\.9: Density plot
You can represent subsets of a variable by assigning the category variable to the argument `group`, `fill`, or `color`.
```
ggplot(pets, aes(score, fill = pet)) +
geom_density(alpha = 0.5)
```
Figure 3\.10: Grouped density plot
Try changing the `alpha` argument to figure out what it does.
### 3\.7\.3 Frequency polygons
If you want the y\-axis to represent count rather than density, try `geom_freqpoly()`.
```
ggplot(pets, aes(score, color = pet)) +
geom_freqpoly(binwidth = 5)
```
Figure 3\.11: Frequency polygon plot
Try changing the `binwidth` argument to 10 and 1\. How do you figure out the right value?
### 3\.7\.4 Histogram
Histograms are also good for one continuous variable, and work well if you don’t have many observations. Set the `binwidth` to control how wide each bar is.
```
ggplot(pets, aes(score)) +
geom_histogram(binwidth = 5, fill = "white", color = "black")
```
Figure 3\.12: Histogram
Histograms in ggplot look pretty bad unless you set the `fill` and `color`.
If you show grouped histograms, you also probably want to change the default `position` argument.
```
ggplot(pets, aes(score, fill=pet)) +
geom_histogram(binwidth = 5, alpha = 0.5,
position = "dodge")
```
Figure 3\.13: Grouped Histogram
Try changing the `position` argument to "identity", "fill", "dodge", or "stack".
### 3\.7\.5 Column plot
Column plots are the worst way to represent grouped continuous data, but also one of the most common. If your data are already aggregated (e.g., you have rows for each group with columns for the mean and standard error), you can use `geom_bar` or `geom_col` and `geom_errorbar` directly. If not, you can use the function `stat_summary` to calculate the mean and standard error and send those numbers to the appropriate geom for plotting.
```
ggplot(pets, aes(pet, score, fill=pet)) +
stat_summary(fun = mean, geom = "col", alpha = 0.5) +
stat_summary(fun.data = mean_se, geom = "errorbar",
width = 0.25) +
coord_cartesian(ylim = c(80, 120))
```
Figure 3\.14: Column plot
Try changing the values for `coord_cartesian`. What does this do?
### 3\.7\.6 Boxplot
Boxplots are great for representing the distribution of grouped continuous variables. They fix most of the problems with using bar/column plots for continuous data.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
```
Figure 3\.15: Box plot
### 3\.7\.7 Violin plot
Violin plots are like sideways, mirrored density plots. They give even more information than a boxplot about distribution and are especially useful when you have non\-normal distributions.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(draw_quantiles = .5,
trim = FALSE, alpha = 0.5)
```
Figure 3\.16: Violin plot
Try changing the `draw_quantiles` argument. Set it to a vector of the numbers 0\.1 to 0\.9 in steps of 0\.1\.
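One way to do that, as a sketch (reusing `seq()` from the data chapter):
```
ggplot(pets, aes(pet, score, fill = pet)) +
  geom_violin(draw_quantiles = seq(0.1, 0.9, by = 0.1),
              trim = FALSE, alpha = 0.5)
```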
### 3\.7\.8 Vertical intervals
Boxplots and violin plots don’t always map well onto inferential stats that use the mean. You can represent the mean and standard error or any other value you can calculate.
Here, we will create a table with the means and standard errors for two groups. We’ll learn how to calculate this from raw data in the chapter on [data wrangling](dplyr.html#dplyr). We also create a new object called `gg` that sets up the base of the plot.
```
dat <- tibble(
group = c("A", "B"),
mean = c(10, 20),
se = c(2, 3)
)
gg <- ggplot(dat, aes(group, mean,
ymin = mean-se,
ymax = mean+se))
```
The trick above can be useful if you want to represent the same data in different ways. You can add different geoms to the base plot without having to re\-type the base plot code.
```
gg + geom_crossbar()
```
Figure 3\.17: geom\_crossbar()
```
gg + geom_errorbar()
```
Figure 3\.18: geom\_errorbar()
```
gg + geom_linerange()
```
Figure 3\.19: geom\_linerange()
```
gg + geom_pointrange()
```
Figure 3\.20: geom\_pointrange()
You can also use the function `stat_summary()` to calculate the mean, standard error, or any other value for your data and display it using any geom.
```
ggplot(pets, aes(pet, score, color=pet)) +
stat_summary(fun.data = mean_se, geom = "crossbar") +
stat_summary(fun.min = function(x) mean(x) - sd(x),
fun.max = function(x) mean(x) + sd(x),
geom = "errorbar", width = 0) +
theme(legend.position = "none") # gets rid of the legend
```
Figure 3\.21: Vertical intervals with stat\_summary()
### 3\.7\.9 Scatter plot
Scatter plots are a good way to represent the relationship between two continuous variables.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_point()
```
Figure 3\.22: Scatter plot using geom\_point()
### 3\.7\.10 Line graph
You often want to represent the relationship as a single line.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.23: Line plot using geom\_smooth()
What are some other options for the `method` argument to `geom_smooth`? When might you want to use them?
You can plot functions other than the linear `y ~ x`. The code below creates a data table where `x` is 101 values between \-10 and 10, and `y` is `x` squared plus `3*x` plus `1`. You’ll probably recognise this from algebra as a quadratic function. You can set the `formula` argument in `geom_smooth` to a quadratic formula (`y ~ x + I(x^2)`) to fit a quadratic function to the data.
```
quad <- tibble(
x = seq(-10, 10, length.out = 101),
y = x^2 + 3*x + 1
)
ggplot(quad, aes(x, y)) +
geom_point() +
geom_smooth(formula = y ~ x + I(x^2),
method="lm")
```
Figure 3\.24: Fitting quadratic functions
3\.8 Customisation
------------------
### 3\.8\.1 Labels
You can set custom titles and axis labels in a few different ways.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
labs(title = "Pet score with Age",
x = "Age (in Years)",
y = "score Score",
color = "Pet Type")
```
Figure 3\.25: Set custom labels with labs()
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
ggtitle("Pet score with Age") +
xlab("Age (in Years)") +
ylab("score Score") +
scale_color_discrete(name = "Pet Type")
```
Figure 3\.26: Set custom labels with individual functions
### 3\.8\.2 Colours
You can set custom values for colour and fill using functions like `scale_colour_manual()` and `scale_fill_manual()`. The [Colours chapter in Cookbook for R](http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/) has many more ways to customise colour.
```
ggplot(pets, aes(pet, score, colour = pet, fill = pet)) +
geom_violin() +
scale_color_manual(values = c("darkgreen", "dodgerblue", "orange")) +
scale_fill_manual(values = c("#CCFFCC", "#BBDDFF", "#FFCC66"))
```
Figure 3\.27: Set custom colour
### 3\.8\.3 Themes
GGplot comes with several additional themes and the ability to fully customise your theme. Type `?theme` into the console to see the full list. Other packages such as `cowplot` also have custom themes. You can add a custom theme to the end of your ggplot object and specify a new `base_size` to make the default fonts and lines larger or smaller.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
theme_minimal(base_size = 18)
```
Figure 3\.28: Minimal theme with 18\-point base font size
It’s more complicated, but you can fully customise your theme with `theme()`. You can save this to an object and add it to the end of all of your plots to make the style consistent. Alternatively, you can set the theme at the top of a script with `theme_set()` and this will apply to all subsequent ggplot plots.
```
vampire_theme <- theme(
rect = element_rect(fill = "black"),
panel.background = element_rect(fill = "black"),
text = element_text(size = 20, colour = "white"),
axis.text = element_text(size = 16, colour = "grey70"),
line = element_line(colour = "white", size = 2),
panel.grid = element_blank(),
axis.line = element_line(colour = "white"),
axis.ticks = element_blank(),
legend.position = "top"
)
theme_set(vampire_theme)
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.29: Custom theme
### 3\.8\.4 Save as file
You can save a ggplot using `ggsave()`. It saves the last ggplot you made, by default, but you can specify which plot you want to save if you assigned that plot to a variable.
You can set the `width` and `height` of your plot. The default units are inches, but you can change the `units` argument to "in", "cm", or "mm".
```
box <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
violin <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(alpha = 0.5)
ggsave("demog_violin_plot.png", width = 5, height = 7)
ggsave("demog_box_plot.jpg", plot = box, width = 5, height = 7)
```
The file type is set from the filename suffix, or by specifying the argument `device`, which can take the following values: "eps", "ps", "tex", "pdf", "jpeg", "tiff", "png", "bmp", "svg" or "wmf".
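For example, a sketch reusing the `box` plot and dimensions from above (both calls write the same PDF file):
```
## the .pdf suffix alone is enough, but device can be given explicitly
ggsave("demog_box_plot.pdf", plot = box, width = 5, height = 7)
ggsave("demog_box_plot.pdf", plot = box, device = "pdf",
       width = 5, height = 7)
```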
3\.9 Combination Plots
----------------------
### 3\.9\.1 Violinbox plot
A combination of a violin plot to show the shape of the distribution and a boxplot to show the median and interquartile ranges can be a very useful visualisation.
```
ggplot(pets, aes(pet, score, fill = pet)) +
geom_violin(show.legend = FALSE) +
geom_boxplot(width = 0.2, fill = "white",
show.legend = FALSE)
```
Figure 3\.30: Violin\-box plot
Set the `show.legend` argument to `FALSE` to hide the legend. We do this here because the x\-axis already labels the pet types.
### 3\.9\.2 Violin\-point\-range plot
You can use `stat_summary()` to superimpose a point\-range plot showing the mean ± 1 SD. You’ll learn how to write your own functions in the lesson on [Iteration and Functions](func.html#func).
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(trim = FALSE, alpha = 0.5) +
stat_summary(
fun = mean,
fun.max = function(x) {mean(x) + sd(x)},
fun.min = function(x) {mean(x) - sd(x)},
geom="pointrange"
)
```
Figure 3\.31: Point\-range plot using stat\_summary()
### 3\.9\.3 Violin\-jitter plot
If you don’t have a lot of data points, it’s good to represent them individually. You can use `geom_jitter` to do this.
```
# sample_n chooses 50 random observations from the dataset
ggplot(sample_n(pets, 50), aes(pet, score, fill=pet)) +
geom_violin(
trim = FALSE,
draw_quantiles = c(0.25, 0.5, 0.75),
alpha = 0.5
) +
geom_jitter(
width = 0.15, # points spread out over 15% of available width
height = 0, # do not move position on the y-axis
alpha = 0.5,
size = 3
)
```
Figure 3\.32: Violin\-jitter plot
### 3\.9\.4 Scatter\-line graph
If your graph isn’t too complicated, it’s good to also show the individual data points behind the line.
```
ggplot(sample_n(pets, 50), aes(age, weight, colour = pet)) +
geom_point() +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.33: Scatter\-line plot
### 3\.9\.5 Grid of plots
You can use the [`cowplot`](https://cran.r-project.org/web/packages/cowplot/vignettes/introduction.html) package to easily make grids of different graphs. First, you have to assign each plot a name. Then you list all the plots as the first arguments of `plot_grid()` and provide a vector of labels.
```
gg <- ggplot(pets, aes(pet, score, colour = pet))
nolegend <- theme(legend.position = "none")
vp <- gg + geom_violin(alpha = 0.5) + nolegend +
ggtitle("Violin Plot")
bp <- gg + geom_boxplot(alpha = 0.5) + nolegend +
ggtitle("Box Plot")
cp <- gg + stat_summary(fun = mean, geom = "col", fill = "white") + nolegend +
ggtitle("Column Plot")
dp <- ggplot(pets, aes(score, colour = pet)) +
geom_density() + nolegend +
ggtitle("Density Plot")
plot_grid(vp, bp, cp, dp, labels = LETTERS[1:4])
```
Figure 3\.34: Grid of plots
3\.10 Overlapping Discrete Data
-------------------------------
### 3\.10\.1 Reducing Opacity
You can deal with overlapping data points (very common if you’re using Likert scales) by reducing the opacity of the points. You need to use trial and error to adjust these so they look right.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_point(alpha = 0.25) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.35: Deal with overlapping data using transparency
### 3\.10\.2 Proportional Dot Plots
Or you can set the size of the dot proportional to the number of overlapping observations using `geom_count()`.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_count()
```
Figure 3\.36: Deal with overlapping data using geom\_count()
Alternatively, you can transform your data (we will learn to do this in the [data wrangling](dplyr.html#dplyr) chapter) to create a count column and use the count to set the dot colour.
```
pets %>%
group_by(age, score) %>%
summarise(count = n(), .groups = "drop") %>%
ggplot(aes(age, score, color=count)) +
geom_point(size = 2) +
scale_color_viridis_c()
```
Figure 3\.37: Deal with overlapping data using dot colour
The [viridis package](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html) changes the colour themes to be easier to read by people with colourblindness and to print better in greyscale. Viridis is built into `ggplot2` since v3\.0\.0\. It uses `scale_colour_viridis_c()` and `scale_fill_viridis_c()` for continuous variables and `scale_colour_viridis_d()` and `scale_fill_viridis_d()` for discrete variables.
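For example, mapping a discrete variable such as `pet` to colour calls for the `_d` variant. Here is a minimal sketch that recolours the `geom_count()` plot from above with the discrete viridis palette:
```
# a minimal sketch: the geom_count() plot from above, recoloured with the
# discrete viridis palette (pet is a factor, so use the _d variant)
ggplot(pets, aes(age, score, colour = pet)) +
  geom_count() +
  scale_colour_viridis_d()
```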
3\.11 Overlapping Continuous Data
---------------------------------
Even if the variables are continuous, overplotting might obscure any relationships if you have lots of data.
```
ggplot(pets, aes(age, score)) +
geom_point()
```
Figure 3\.38: Overplotted data
### 3\.11\.1 2D Density Plot
Use `geom_density2d()` to create a contour map.
```
ggplot(pets, aes(age, score)) +
geom_density2d()
```
Figure 3\.39: Contour map with geom\_density2d()
You can use `stat_density_2d(aes(fill = ..level..), geom = "polygon")` to create a heatmap\-style density plot.
```
ggplot(pets, aes(age, score)) +
stat_density_2d(aes(fill = ..level..), geom = "polygon") +
scale_fill_viridis_c()
```
Figure 3\.40: Heatmap\-density plot
### 3\.11\.2 2D Histogram
Use `geom_bin2d()` to create a rectangular heatmap of bin counts. Set the `binwidth` to the x and y dimensions to capture in each box.
```
ggplot(pets, aes(age, score)) +
geom_bin2d(binwidth = c(1, 5))
```
Figure 3\.41: Heatmap of bin counts
### 3\.11\.3 Hexagonal Heatmap
Use `geom_hex()` to create a hexagonal heatmap of bin counts. Adjust the `binwidth`, `xlim()`, `ylim()` and/or the figure dimensions to make the hexagons more or less stretched.
```
ggplot(pets, aes(age, score)) +
geom_hex(binwidth = c(1, 5))
```
Figure 3\.42: Hexagonal heatmap of bin counts
### 3\.11\.4 Correlation Heatmap
I’ve included the code for creating a correlation matrix from a table of variables, but you don’t need to understand how this is done yet. We’ll cover `mutate()` and `gather()` functions in the [dplyr](dplyr.html#dplyr) and [tidyr](tidyr.html#tidyr) lessons.
```
heatmap <- pets %>%
select_if(is.numeric) %>% # get just the numeric columns
cor() %>% # create the correlation matrix
as_tibble(rownames = "V1") %>% # make it a tibble
gather("V2", "r", 2:ncol(.)) # wide to long (V2)
```
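If you have a newer version of tidyr, `pivot_longer()` performs the same wide\-to\-long reshaping as the `gather()` call above; this sketch should produce an equivalent table:
```
# the same reshaping using pivot_longer() (tidyr >= 1.0.0)
heatmap <- pets %>%
  select_if(is.numeric) %>% # get just the numeric columns
  cor() %>% # create the correlation matrix
  as_tibble(rownames = "V1") %>% # make it a tibble
  pivot_longer(-V1, names_to = "V2", values_to = "r") # wide to long (V2)
```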
Once you have a correlation matrix in the correct (long) format, it’s easy to make a heatmap using `geom_tile()`.
```
ggplot(heatmap, aes(V1, V2, fill=r)) +
geom_tile() +
scale_fill_viridis_c()
```
Figure 3\.43: Heatmap using geom\_tile()
3\.12 Interactive Plots
-----------------------
You can use the `plotly` package to make interactive graphs. Just assign your ggplot to a variable and use the function `ggplotly()`.
```
demog_plot <- ggplot(pets, aes(age, score, fill=pet)) +
geom_point() +
geom_smooth(formula = y~x, method = lm)
ggplotly(demog_plot)
```
Figure 3\.44: Interactive graph using plotly
Hover over the data points above and click on the legend items.
3\.13 Glossary
--------------
| term | definition |
| --- | --- |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [geom](https://psyteachr.github.io/glossary/g#geom) | The geometric style in which data are displayed, such as boxplot, density, or histogram. |
| [likert](https://psyteachr.github.io/glossary/l#likert) | A rating scale with a small number of discrete points in order |
| [nominal](https://psyteachr.github.io/glossary/n#nominal) | Categorical variables that don’t have an inherent order, such as types of animal. |
| [ordinal](https://psyteachr.github.io/glossary/o#ordinal) | Discrete variables that have an inherent order, such as number of legs |
3\.14 Exercises
---------------
Download the [exercises](exercises/03_ggplot_exercise.Rmd). See the [plots](exercises/03_ggplot_answers.html) to see what your plots should look like (this doesn’t contain the answer code). See the [answers](exercises/03_ggplot_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(3)
# run this to access the answers
dataskills::exercise(3, answers = TRUE)
```
3\.1 Learning Objectives
------------------------
### 3\.1\.1 Basic
1. Understand what types of graphs are best for [different types of data](ggplot.html#vartypes) [(video)](https://youtu.be/tOFQFPRgZ3M)
* 1 discrete
* 1 continuous
* 2 discrete
* 2 continuous
* 1 discrete, 1 continuous
* 3 continuous
2. Create common types of graphs with ggplot2 [(video)](https://youtu.be/kKlQupjD__g)
* [`geom_bar()`](ggplot.html#geom_bar)
* [`geom_density()`](ggplot.html#geom_density)
* [`geom_freqpoly()`](ggplot.html#geom_freqpoly)
* [`geom_histogram()`](ggplot.html#geom_histogram)
* [`geom_col()`](ggplot.html#geom_col)
* [`geom_boxplot()`](ggplot.html#geom_boxplot)
* [`geom_violin()`](ggplot.html#geom_violin)
* [Vertical Intervals](ggplot.html#vertical_intervals)
+ `geom_crossbar()`
+ `geom_errorbar()`
+ `geom_linerange()`
+ `geom_pointrange()`
* [`geom_point()`](ggplot.html#geom_point)
* [`geom_smooth()`](ggplot.html#geom_smooth)
3. Set custom [labels](ggplot.html#custom-labels), [colours](ggplot.html#custom-colours), and [themes](ggplot.html#themes) [(video)](https://youtu.be/6pHuCbOh86s)
4. [Combine plots](combo_plots) on the same plot, as facets, or as a grid using cowplot [(video)](https://youtu.be/AnqlfuU-VZk)
5. [Save plots](ggplot.html#ggsave) as an image file [(video)](https://youtu.be/f1Y53mjEli0)
### 3\.1\.2 Intermediate
6. Add lines to graphs
7. Deal with [overlapping data](ggplot.html#overlap)
8. Create less common types of graphs
* [`geom_tile()`](ggplot.html#geom_tile)
* [`geom_density2d()`](ggplot.html#geom_density2d)
* [`geom_bin2d()`](ggplot.html#geom_bin2d)
* [`geom_hex()`](ggplot.html#geom_hex)
* [`geom_count()`](ggplot.html#geom_count)
9. Adjust axes (e.g., flip coordinates, set axis limits)
10. Create interactive graphs with [`plotly`](ggplot.html#plotly)
3\.2 Resources
--------------
* [Chapter 3: Data Visualisation](http://r4ds.had.co.nz/data-visualisation.html) of *R for Data Science*
* [ggplot2 cheat sheet](https://github.com/rstudio/cheatsheets/raw/master/data-visualization-2.1.pdf)
* [Chapter 28: Graphics for communication](http://r4ds.had.co.nz/graphics-for-communication.html) of *R for Data Science*
* [Look at Data](http://socviz.co/look-at-data.html) from [Data Vizualization for Social Science](http://socviz.co/)
* [Hack Your Data Beautiful](https://psyteachr.github.io/hack-your-data/) workshop by University of Glasgow postgraduate students
* [Graphs](http://www.cookbook-r.com/Graphs) in *Cookbook for R*
* [ggplot2 documentation](https://ggplot2.tidyverse.org/reference/)
* [The R Graph Gallery](http://www.r-graph-gallery.com/) (this is really useful)
* [Top 50 ggplot2 Visualizations](http://r-statistics.co/Top50-Ggplot2-Visualizations-MasterList-R-Code.html)
* [R Graphics Cookbook](http://www.cookbook-r.com/Graphs/) by Winston Chang
* [ggplot extensions](https://www.ggplot2-exts.org/)
* [plotly](https://plot.ly/ggplot2/) for creating interactive graphs
3\.3 Setup
----------
```
# libraries needed for these graphs
library(tidyverse)
library(dataskills)
library(plotly)
library(cowplot)
set.seed(30250) # makes sure random numbers are reproducible
```
3\.4 Common Variable Combinations
---------------------------------
[Continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") variables are properties you can measure, like height. [Discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") variables are things you can count, like the number of pets you have. Categorical variables can be [nominal](https://psyteachr.github.io/glossary/n#nominal "Categorical variables that don’t have an inherent order, such as types of animal."), where the categories don’t really have an order, like cats, dogs and ferrets (even though ferrets are obviously best). They can also be [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs"), where there is a clear order, but the distance between the categories isn’t something you could exactly equate, like points on a [Likert](https://psyteachr.github.io/glossary/l#likert "A rating scale with a small number of discrete points in order") rating scale.
Different types of visualisations are good for different types of variables.
Load the `pets` dataset from the `dataskills` package and explore it with `glimpse(pets)` or `View(pets)`. This is a simulated dataset with one random factor (`id`), two categorical factors (`pet`, `country`) and three continuous variables (`score`, `age`, `weight`).
```
data("pets")
# if you don't have the dataskills package, use:
# pets <- read_csv("https://psyteachr.github.io/msc-data-skills/data/pets.csv", col_types = "cffiid")
glimpse(pets)
```
```
## Rows: 800
## Columns: 6
## $ id <chr> "S001", "S002", "S003", "S004", "S005", "S006", "S007", "S008"…
## $ pet <fct> dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, do…
## $ country <fct> UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK…
## $ score <int> 90, 107, 94, 120, 111, 110, 100, 107, 106, 109, 85, 110, 102, …
## $ age <int> 6, 8, 2, 10, 4, 8, 9, 8, 6, 11, 5, 9, 1, 10, 7, 8, 1, 8, 5, 13…
## $ weight <dbl> 19.78932, 20.01422, 19.14863, 19.56953, 21.39259, 21.31880, 19…
```
Before you read ahead, come up with an example of each type of variable combination and sketch the types of graphs that would best display these data.
* 1 categorical
* 1 continuous
* 2 categorical
* 2 continuous
* 1 categorical, 1 continuous
* 3 continuous
3\.5 Basic Plots
----------------
R has some basic plotting functions, but they’re difficult to use and aesthetically not very nice. They can be useful to have a quick look at data while you’re working on a script, though. The function `plot()` usually defaults to a sensible type of plot, depending on whether the arguments `x` and `y` are categorical, continuous, or missing.
```
plot(x = pets$pet)
```
Figure 3\.1: plot() with categorical x
```
plot(x = pets$pet, y = pets$score)
```
Figure 3\.2: plot() with categorical x and continuous y
```
plot(x = pets$age, y = pets$weight)
```
Figure 3\.3: plot() with continuous x and y
The function `hist()` creates a quick histogram so you can see the distribution of your data. You can adjust how many columns are plotted with the argument `breaks`.
```
hist(pets$score, breaks = 20)
```
Figure 3\.4: hist()
3\.6 GGplots
------------
While the functions above are nice for quick visualisations, it’s hard to make pretty, publication\-ready plots. The package `ggplot2` (loaded with `tidyverse`) is one of the most common packages for creating beautiful visualisations.
`ggplot2` creates plots using a “grammar of graphics” where you add [geoms](https://psyteachr.github.io/glossary/g#geom "The geometric style in which data are displayed, such as boxplot, density, or histogram.") in layers. It can be complex to understand, but it’s very powerful once you have a mental model of how it works.
Let’s start with a totally empty plot layer created by the `ggplot()` function with no arguments.
```
ggplot()
```
Figure 3\.5: A plot base created by ggplot()
The first argument to `ggplot()` is the `data` table you want to plot. Let’s use the `pets` data we loaded above. The second argument is the `mapping` for which columns in your data table correspond to which properties of the plot, such as the `x`\-axis, the `y`\-axis, line `colour` or `linetype`, point `shape`, or object `fill`. These mappings are specified by the `aes()` function. Just adding this to the `ggplot` function creates the labels and ranges for the `x` and `y` axes. They usually have sensible default values, given your data, but we’ll learn how to change them later.
```
mapping <- aes(x = pet,
y = score,
colour = country,
fill = country)
ggplot(data = pets, mapping = mapping)
```
Figure 3\.6: Empty ggplot with x and y labels
People usually omit the argument names and just put the `aes()` function directly as the second argument to `ggplot`. They also usually omit `x` and `y` as argument names to `aes()` (but you have to name the other properties). Next we can add “geoms,” or plot styles. You literally add them with the `+` symbol. You can also add other plot attributes, such as labels, or change the theme and base font size.
```
ggplot(pets, aes(pet, score, colour = country, fill = country)) +
geom_violin(alpha = 0.5) +
labs(x = "Pet type",
y = "Score on an Important Test",
colour = "Country of Origin",
fill = "Country of Origin",
title = "My first plot!") +
theme_bw(base_size = 15)
```
Figure 3\.7: Violin plot with country represented by colour.
3\.7 Common Plot Types
----------------------
There are many geoms, and they can take different arguments to customise their appearance. We’ll learn about some of the most common below.
### 3\.7\.1 Bar plot
Bar plots are good for categorical data where you want to represent the count.
```
ggplot(pets, aes(pet)) +
geom_bar()
```
Figure 3\.8: Bar plot
### 3\.7\.2 Density plot
Density plots are good for one continuous variable, but only if you have a fairly large number of observations.
```
ggplot(pets, aes(score)) +
geom_density()
```
Figure 3\.9: Density plot
You can represent subsets of a variable by assigning the category variable to the argument `group`, `fill`, or `color`.
```
ggplot(pets, aes(score, fill = pet)) +
geom_density(alpha = 0.5)
```
Figure 3\.10: Grouped density plot
Try changing the `alpha` argument to figure out what it does.
### 3\.7\.3 Frequency polygons
If you want the y\-axis to represent count rather than density, try `geom_freqpoly()`.
```
ggplot(pets, aes(score, color = pet)) +
geom_freqpoly(binwidth = 5)
```
Figure 3\.11: Frequency polygon plot
Try changing the `binwidth` argument to 10 and 1\. How do you figure out the right value?
### 3\.7\.4 Histogram
Histograms are also good for one continuous variable, and work well if you don’t have many observations. Set the `binwidth` to control how wide each bar is.
```
ggplot(pets, aes(score)) +
geom_histogram(binwidth = 5, fill = "white", color = "black")
```
Figure 3\.12: Histogram
Histograms in ggplot look pretty bad unless you set the `fill` and `color`.
If you show grouped histograms, you also probably want to change the default `position` argument.
```
ggplot(pets, aes(score, fill=pet)) +
geom_histogram(binwidth = 5, alpha = 0.5,
position = "dodge")
```
Figure 3\.13: Grouped Histogram
Try changing the `position` argument to “identity,” “fill,” “dodge,” or “stack.”
### 3\.7\.5 Column plot
Column plots are the worst way to represent grouped continuous data, but also one of the most common. If your data are already aggregated (e.g., you have rows for each group with columns for the mean and standard error), you can use `geom_bar` or `geom_col` and `geom_errorbar` directly. If not, you can use the function `stat_summary` to calculate the mean and standard error and send those numbers to the appropriate geom for plotting.
```
ggplot(pets, aes(pet, score, fill=pet)) +
stat_summary(fun = mean, geom = "col", alpha = 0.5) +
stat_summary(fun.data = mean_se, geom = "errorbar",
width = 0.25) +
coord_cartesian(ylim = c(80, 120))
```
Figure 3\.14: Column plot
Try changing the values for `coord_cartesian`. What does this do?
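If your data are already aggregated, as described above, you can skip `stat_summary` and plot the summary columns directly with `geom_col()` and `geom_errorbar()`. Here is a minimal sketch using a made\-up table of means and standard errors:
```
# made-up means and standard errors for each pet type (illustration only)
pet_means <- tibble(
  pet = c("cat", "dog", "ferret"),
  mean = c(100, 105, 112),
  se = c(1.5, 1.2, 2.0)
)
ggplot(pet_means, aes(pet, mean, fill = pet)) +
  geom_col(alpha = 0.5) +
  geom_errorbar(aes(ymin = mean - se, ymax = mean + se), width = 0.25)
```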
### 3\.7\.6 Boxplot
Boxplots are great for representing the distribution of grouped continuous variables. They fix most of the problems with using bar/column plots for continuous data.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
```
Figure 3\.15: Box plot
### 3\.7\.7 Violin plot
Violin plots are like sideways, mirrored density plots. They give even more information than a boxplot about the distribution and are especially useful when you have non\-normal distributions.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(draw_quantiles = .5,
trim = FALSE, alpha = 0.5)
```
Figure 3\.16: Violin plot
Try changing the `draw_quantiles` argument. Set it to a vector of the numbers 0\.1 to 0\.9 in steps of 0\.1\.
### 3\.7\.8 Vertical intervals
Boxplots and violin plots don’t always map well onto inferential stats that use the mean. You can represent the mean and standard error or any other value you can calculate.
Here, we will create a table with the means and standard errors for two groups. We’ll learn how to calculate this from raw data in the chapter on [data wrangling](dplyr.html#dplyr). We also create a new object called `gg` that sets up the base of the plot.
```
dat <- tibble(
group = c("A", "B"),
mean = c(10, 20),
se = c(2, 3)
)
gg <- ggplot(dat, aes(group, mean,
ymin = mean-se,
ymax = mean+se))
```
The trick above can be useful if you want to represent the same data in different ways. You can add different geoms to the base plot without having to re\-type the base plot code.
```
gg + geom_crossbar()
```
Figure 3\.17: geom\_crossbar()
```
gg + geom_errorbar()
```
Figure 3\.18: geom\_errorbar()
```
gg + geom_linerange()
```
Figure 3\.19: geom\_linerange()
```
gg + geom_pointrange()
```
Figure 3\.20: geom\_pointrange()
You can also use the function `stat_summary` to calculate mean, standard error, or any other value for your data and display it using any geom.
```
ggplot(pets, aes(pet, score, color=pet)) +
stat_summary(fun.data = mean_se, geom = "crossbar") +
stat_summary(fun.min = function(x) mean(x) - sd(x),
fun.max = function(x) mean(x) + sd(x),
geom = "errorbar", width = 0) +
theme(legend.position = "none") # gets rid of the legend
```
Figure 3\.21: Vertical intervals with stat\_summary()
### 3\.7\.9 Scatter plot
Scatter plots are a good way to represent the relationship between two continuous variables.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_point()
```
Figure 3\.22: Scatter plot using geom\_point()
### 3\.7\.10 Line graph
You often want to represent the relationship as a single line.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.23: Line plot using geom\_smooth()
What are some other options for the `method` argument to `geom_smooth`? When might you want to use them?
You can plot functions other than the linear `y ~ x`. The code below creates a data table where `x` is 101 values between \-10 and 10, and `y` is `x` squared plus `3*x` plus `1`. You’ll probably recognise this from algebra as a quadratic function. You can set the `formula` argument in `geom_smooth` to a quadratic formula (`y ~ x + I(x^2)`) to fit a quadratic function to the data.
```
quad <- tibble(
x = seq(-10, 10, length.out = 101),
y = x^2 + 3*x + 1
)
ggplot(quad, aes(x, y)) +
geom_point() +
geom_smooth(formula = y ~ x + I(x^2),
method="lm")
```
Figure 3\.24: Fitting quadratic functions
3\.8 Customisation
------------------
### 3\.8\.1 Labels
You can set custom titles and axis labels in a few different ways.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
labs(title = "Pet score with Age",
x = "Age (in Years)",
y = "score Score",
color = "Pet Type")
```
Figure 3\.25: Set custom labels with labs()
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
ggtitle("Pet score with Age") +
xlab("Age (in Years)") +
ylab("score Score") +
scale_color_discrete(name = "Pet Type")
```
Figure 3\.26: Set custom labels with individual functions
### 3\.8\.2 Colours
You can set custom values for colour and fill using functions like `scale_colour_manual()` and `scale_fill_manual()`. The [Colours chapter in Cookbook for R](http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/) has many more ways to customise colour.
```
ggplot(pets, aes(pet, score, colour = pet, fill = pet)) +
geom_violin() +
scale_color_manual(values = c("darkgreen", "dodgerblue", "orange")) +
scale_fill_manual(values = c("#CCFFCC", "#BBDDFF", "#FFCC66"))
```
Figure 3\.27: Set custom colour
### 3\.8\.3 Themes
GGplot comes with several additional themes and the ability to fully customise your theme. Type `?theme` into the console to see the full list. Other packages such as `cowplot` also have custom themes. You can add a custom theme to the end of your ggplot object and specify a new `base_size` to make the default fonts and lines larger or smaller.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
theme_minimal(base_size = 18)
```
Figure 3\.28: Minimal theme with 18\-point base font size
It’s more complicated, but you can fully customise your theme with `theme()`. You can save this to an object and add it to the end of all of your plots to make the style consistent. Alternatively, you can set the theme at the top of a script with `theme_set()` and this will apply to all subsequent ggplot plots.
```
vampire_theme <- theme(
rect = element_rect(fill = "black"),
panel.background = element_rect(fill = "black"),
text = element_text(size = 20, colour = "white"),
axis.text = element_text(size = 16, colour = "grey70"),
line = element_line(colour = "white", size = 2),
panel.grid = element_blank(),
axis.line = element_line(colour = "white"),
axis.ticks = element_blank(),
legend.position = "top"
)
theme_set(vampire_theme)
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.29: Custom theme
### 3\.8\.4 Save as file
You can save a ggplot using `ggsave()`. It saves the last ggplot you made, by default, but you can specify which plot you want to save if you assigned that plot to a variable.
You can set the `width` and `height` of your plot. The default units are inches, but you can change the `units` argument to “in,” “cm,” or “mm.”
```
box <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
violin <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(alpha = 0.5)
ggsave("demog_violin_plot.png", width = 5, height = 7)
ggsave("demog_box_plot.jpg", plot = box, width = 5, height = 7)
```
The file type is set from the filename suffix, or by specifying the argument `device`, which can take the following values: “eps,” “ps,” “tex,” “pdf,” “jpeg,” “tiff,” “png,” “bmp,” “svg” or “wmf.”
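For example, you could save the `violin` plot defined above as a PDF and specify its size in centimetres; a minimal sketch:
```
# a sketch: set the device and units explicitly instead of relying on defaults
ggsave("demog_violin_plot.pdf", plot = violin, device = "pdf",
       width = 12, height = 18, units = "cm")
```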
3\.9 Combination Plots
----------------------
### 3\.9\.1 Violin\-box plot
A combination of a violin plot to show the shape of the distribution and a boxplot to show the median and interquartile ranges can be a very useful visualisation.
```
ggplot(pets, aes(pet, score, fill = pet)) +
geom_violin(show.legend = FALSE) +
geom_boxplot(width = 0.2, fill = "white",
show.legend = FALSE)
```
Figure 3\.30: Violin\-box plot
Set the `show.legend` argument to `FALSE` to hide the legend. We do this here because the x\-axis already labels the pet types.
### 3\.9\.2 Violin\-point\-range plot
You can use `stat_summary()` to superimpose a point\-range plot showing the mean ± 1 SD. You’ll learn how to write your own functions in the lesson on [Iteration and Functions](func.html#func).
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(trim = FALSE, alpha = 0.5) +
stat_summary(
fun = mean,
fun.max = function(x) {mean(x) + sd(x)},
fun.min = function(x) {mean(x) - sd(x)},
geom="pointrange"
)
```
Figure 3\.31: Point\-range plot using stat\_summary()
### 3\.9\.3 Violin\-jitter plot
If you don’t have a lot of data points, it’s good to represent them individually. You can use `geom_jitter` to do this.
```
# sample_n chooses 50 random observations from the dataset
ggplot(sample_n(pets, 50), aes(pet, score, fill=pet)) +
geom_violin(
trim = FALSE,
draw_quantiles = c(0.25, 0.5, 0.75),
alpha = 0.5
) +
geom_jitter(
width = 0.15, # points spread out over 15% of available width
height = 0, # do not move position on the y-axis
alpha = 0.5,
size = 3
)
```
Figure 3\.32: Violin\-jitter plot
### 3\.9\.4 Scatter\-line graph
If your graph isn’t too complicated, it’s good to also show the individual data points behind the line.
```
ggplot(sample_n(pets, 50), aes(age, weight, colour = pet)) +
geom_point() +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.33: Scatter\-line plot
### 3\.9\.5 Grid of plots
You can use the [`cowplot`](https://cran.r-project.org/web/packages/cowplot/vignettes/introduction.html) package to easily make grids of different graphs. First, you have to assign each plot a name. Then you list all the plots as the first arguments of `plot_grid()` and provide a vector of labels.
```
gg <- ggplot(pets, aes(pet, score, colour = pet))
nolegend <- theme(legend.position = "none")
vp <- gg + geom_violin(alpha = 0.5) + nolegend +
ggtitle("Violin Plot")
bp <- gg + geom_boxplot(alpha = 0.5) + nolegend +
ggtitle("Box Plot")
cp <- gg + stat_summary(fun = mean, geom = "col", fill = "white") + nolegend +
ggtitle("Column Plot")
dp <- ggplot(pets, aes(score, colour = pet)) +
geom_density() + nolegend +
ggtitle("Density Plot")
plot_grid(vp, bp, cp, dp, labels = LETTERS[1:4])
```
Figure 3\.34: Grid of plots
3\.10 Overlapping Discrete Data
-------------------------------
### 3\.10\.1 Reducing Opacity
You can deal with overlapping data points (very common if you’re using Likert scales) by reducing the opacity of the points. You need to use trial and error to adjust these so they look right.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_point(alpha = 0.25) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.35: Deal with overlapping data using transparency
### 3\.10\.2 Proportional Dot Plots
Or you can set the size of the dot proportional to the number of overlapping observations using `geom_count()`.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_count()
```
Figure 3\.36: Deal with overlapping data using geom\_count()
Alternatively, you can transform your data (we will learn to do this in the [data wrangling](dplyr.html#dplyr) chapter) to create a count column and use the count to set the dot colour.
```
pets %>%
group_by(age, score) %>%
summarise(count = n(), .groups = "drop") %>%
ggplot(aes(age, score, color=count)) +
geom_point(size = 2) +
scale_color_viridis_c()
```
Figure 3\.37: Deal with overlapping data using dot colour
The [viridis package](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html) changes the colour themes to be easier to read by people with colourblindness and to print better in greyscale. Viridis is built into `ggplot2` since v3\.0\.0\. It uses `scale_colour_viridis_c()` and `scale_fill_viridis_c()` for continuous variables and `scale_colour_viridis_d()` and `scale_fill_viridis_d()` for discrete variables.
3\.11 Overlapping Continuous Data
---------------------------------
Even if the variables are continuous, overplotting might obscure any relationships if you have lots of data.
```
ggplot(pets, aes(age, score)) +
geom_point()
```
Figure 3\.38: Overplotted data
### 3\.11\.1 2D Density Plot
Use `geom_density2d()` to create a contour map.
```
ggplot(pets, aes(age, score)) +
geom_density2d()
```
Figure 3\.39: Contour map with geom\_density2d()
You can use `stat_density_2d(aes(fill = ..level..), geom = "polygon")` to create a heatmap\-style density plot.
```
ggplot(pets, aes(age, score)) +
stat_density_2d(aes(fill = ..level..), geom = "polygon") +
scale_fill_viridis_c()
```
Figure 3\.40: Heatmap\-density plot
### 3\.11\.2 2D Histogram
Use `geom_bin2d()` to create a rectangular heatmap of bin counts. Set the `binwidth` to the x and y dimensions to capture in each box.
```
ggplot(pets, aes(age, score)) +
geom_bin2d(binwidth = c(1, 5))
```
Figure 3\.41: Heatmap of bin counts
### 3\.11\.3 Hexagonal Heatmap
Use `geom_hex()` to create a hexagonal heatmap of bin counts. Adjust the `binwidth`, `xlim()`, `ylim()` and/or the figure dimensions to make the hexagons more or less stretched.
```
ggplot(pets, aes(age, score)) +
geom_hex(binwidth = c(1, 5))
```
Figure 3\.42: Hexagonal heatmap of bin counts
### 3\.11\.4 Correlation Heatmap
I’ve included the code for creating a correlation matrix from a table of variables, but you don’t need to understand how this is done yet. We’ll cover `mutate()` and `gather()` functions in the [dplyr](dplyr.html#dplyr) and [tidyr](tidyr.html#tidyr) lessons.
```
heatmap <- pets %>%
select_if(is.numeric) %>% # get just the numeric columns
cor() %>% # create the correlation matrix
as_tibble(rownames = "V1") %>% # make it a tibble
gather("V2", "r", 2:ncol(.)) # wide to long (V2)
```
Once you have a correlation matrix in the correct (long) format, it’s easy to make a heatmap using `geom_tile()`.
```
ggplot(heatmap, aes(V1, V2, fill=r)) +
geom_tile() +
scale_fill_viridis_c()
```
Figure 3\.43: Heatmap using geom\_tile()
3\.12 Interactive Plots
-----------------------
You can use the `plotly` package to make interactive graphs. Just assign your ggplot to a variable and use the function `ggplotly()`.
```
demog_plot <- ggplot(pets, aes(age, score, fill=pet)) +
geom_point() +
geom_smooth(formula = y~x, method = lm)
ggplotly(demog_plot)
```
Figure 3\.44: Interactive graph using plotly
Hover over the data points above and click on the legend items.
3\.13 Glossary
--------------
| term | definition |
| --- | --- |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [geom](https://psyteachr.github.io/glossary/g#geom) | The geometric style in which data are displayed, such as boxplot, density, or histogram. |
| [likert](https://psyteachr.github.io/glossary/l#likert) | A rating scale with a small number of discrete points in order |
| [nominal](https://psyteachr.github.io/glossary/n#nominal) | Categorical variables that don’t have an inherent order, such as types of animal. |
| [ordinal](https://psyteachr.github.io/glossary/o#ordinal) | Discrete variables that have an inherent order, such as number of legs |
3\.14 Exercises
---------------
Download the [exercises](exercises/03_ggplot_exercise.Rmd). See the [plots](exercises/03_ggplot_answers.html) to see what your plots should look like (this doesn’t contain the answer code). See the [answers](exercises/03_ggplot_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(3)
# run this to access the answers
dataskills::exercise(3, answers = TRUE)
```
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/ggplot.html |
Chapter 3 Data Visualisation
============================
3\.1 Learning Objectives
------------------------
### 3\.1\.1 Basic
1. Understand what types of graphs are best for [different types of data](ggplot.html#vartypes) [(video)](https://youtu.be/tOFQFPRgZ3M)
* 1 discrete
* 1 continuous
* 2 discrete
* 2 continuous
* 1 discrete, 1 continuous
* 3 continuous
2. Create common types of graphs with ggplot2 [(video)](https://youtu.be/kKlQupjD__g)
* [`geom_bar()`](ggplot.html#geom_bar)
* [`geom_density()`](ggplot.html#geom_density)
* [`geom_freqpoly()`](ggplot.html#geom_freqpoly)
* [`geom_histogram()`](ggplot.html#geom_histogram)
* [`geom_col()`](ggplot.html#geom_col)
* [`geom_boxplot()`](ggplot.html#geom_boxplot)
* [`geom_violin()`](ggplot.html#geom_violin)
* [Vertical Intervals](ggplot.html#vertical_intervals)
+ `geom_crossbar()`
+ `geom_errorbar()`
+ `geom_linerange()`
+ `geom_pointrange()`
* [`geom_point()`](ggplot.html#geom_point)
* [`geom_smooth()`](ggplot.html#geom_smooth)
3. Set custom [labels](ggplot.html#custom-labels), [colours](ggplot.html#custom-colours), and [themes](ggplot.html#themes) [(video)](https://youtu.be/6pHuCbOh86s)
4. [Combine plots](combo_plots) on the same plot, as facets, or as a grid using cowplot [(video)](https://youtu.be/AnqlfuU-VZk)
5. [Save plots](ggplot.html#ggsave) as an image file [(video)](https://youtu.be/f1Y53mjEli0)
### 3\.1\.2 Intermediate
6. Add lines to graphs
7. Deal with [overlapping data](ggplot.html#overlap)
8. Create less common types of graphs
* [`geom_tile()`](ggplot.html#geom_tile)
* [`geom_density2d()`](ggplot.html#geom_density2d)
* [`geom_bin2d()`](ggplot.html#geom_bin2d)
* [`geom_hex()`](ggplot.html#geom_hex)
* [`geom_count()`](ggplot.html#geom_count)
9. Adjust axes (e.g., flip coordinates, set axis limits)
10. Create interactive graphs with [`plotly`](ggplot.html#plotly)
3\.2 Resources
--------------
* [Chapter 3: Data Visualisation](http://r4ds.had.co.nz/data-visualisation.html) of *R for Data Science*
* [ggplot2 cheat sheet](https://github.com/rstudio/cheatsheets/raw/master/data-visualization-2.1.pdf)
* [Chapter 28: Graphics for communication](http://r4ds.had.co.nz/graphics-for-communication.html) of *R for Data Science*
* [Look at Data](http://socviz.co/look-at-data.html) from [Data Vizualization for Social Science](http://socviz.co/)
* [Hack Your Data Beautiful](https://psyteachr.github.io/hack-your-data/) workshop by University of Glasgow postgraduate students
* [Graphs](http://www.cookbook-r.com/Graphs) in *Cookbook for R*
* [ggplot2 documentation](https://ggplot2.tidyverse.org/reference/)
* [The R Graph Gallery](http://www.r-graph-gallery.com/) (this is really useful)
* [Top 50 ggplot2 Visualizations](http://r-statistics.co/Top50-Ggplot2-Visualizations-MasterList-R-Code.html)
* [R Graphics Cookbook](http://www.cookbook-r.com/Graphs/) by Winston Chang
* [ggplot extensions](https://www.ggplot2-exts.org/)
* [plotly](https://plot.ly/ggplot2/) for creating interactive graphs
3\.3 Setup
----------
```
# libraries needed for these graphs
library(tidyverse)
library(dataskills)
library(plotly)
library(cowplot)
set.seed(30250) # makes sure random numbers are reproducible
```
3\.4 Common Variable Combinations
---------------------------------
[Continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") variables are properties you can measure, like height. [Discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") variables are things you can count, like the number of pets you have. Categorical variables can be [nominal](https://psyteachr.github.io/glossary/n#nominal "Categorical variables that don’t have an inherent order, such as types of animal."), where the categories don’t really have an order, like cats, dogs and ferrets (even though ferrets are obviously best). They can also be [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs"), where there is a clear order, but the distance between the categories isn’t something you could exactly equate, like points on a [Likert](https://psyteachr.github.io/glossary/l#likert "A rating scale with a small number of discrete points in order") rating scale.
Different types of visualisations are good for different types of variables.
Load the `pets` dataset from the `dataskills` package and explore it with `glimpse(pets)` or `View(pets)`. This is a simulated dataset with one random factor (`id`), two categorical factors (`pet`, `country`) and three continuous variables (`score`, `age`, `weight`).
```
data("pets")
# if you don't have the dataskills package, use:
# pets <- read_csv("https://psyteachr.github.io/msc-data-skills/data/pets.csv", col_types = "cffiid")
glimpse(pets)
```
```
## Rows: 800
## Columns: 6
## $ id <chr> "S001", "S002", "S003", "S004", "S005", "S006", "S007", "S008"…
## $ pet <fct> dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, do…
## $ country <fct> UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK…
## $ score <int> 90, 107, 94, 120, 111, 110, 100, 107, 106, 109, 85, 110, 102, …
## $ age <int> 6, 8, 2, 10, 4, 8, 9, 8, 6, 11, 5, 9, 1, 10, 7, 8, 1, 8, 5, 13…
## $ weight <dbl> 19.78932, 20.01422, 19.14863, 19.56953, 21.39259, 21.31880, 19…
```
Before you read ahead, come up with an example of each type of variable combination and sketch the types of graphs that would best display these data.
* 1 categorical
* 1 continuous
* 2 categorical
* 2 continuous
* 1 categorical, 1 continuous
* 3 continuous
3\.5 Basic Plots
----------------
R has some basic plotting functions, but they’re difficult to use and aesthetically not very nice. They can be useful to have a quick look at data while you’re working on a script, though. The function `plot()` usually defaults to a sensible type of plot, depending on whether the arguments `x` and `y` are categorical, continuous, or missing.
```
plot(x = pets$pet)
```
Figure 3\.1: plot() with categorical x
```
plot(x = pets$pet, y = pets$score)
```
Figure 3\.2: plot() with categorical x and continuous y
```
plot(x = pets$age, y = pets$weight)
```
Figure 3\.3: plot() with continuous x and y
The function `hist()` creates a quick histogram so you can see the distribution of your data. You can adjust how many columns are plotted with the argument `breaks`.
```
hist(pets$score, breaks = 20)
```
Figure 3\.4: hist()
3\.6 GGplots
------------
While the functions above are nice for quick visualisations, it’s hard to make pretty, publication\-ready plots. The package `ggplot2` (loaded with `tidyverse`) is one of the most common packages for creating beautiful visualisations.
`ggplot2` creates plots using a “grammar of graphics” where you add [geoms](https://psyteachr.github.io/glossary/g#geom "The geometric style in which data are displayed, such as boxplot, density, or histogram.") in layers. It can be complex to understand, but it’s very powerful once you have a mental model of how it works.
Let’s start with a totally empty plot layer created by the `ggplot()` function with no arguments.
```
ggplot()
```
Figure 3\.5: A plot base created by ggplot()
The first argument to `ggplot()` is the `data` table you want to plot. Let’s use the `pets` data we loaded above. The second argument is the `mapping` for which columns in your data table correspond to which properties of the plot, such as the `x`\-axis, the `y`\-axis, line `colour` or `linetype`, point `shape`, or object `fill`. These mappings are specified by the `aes()` function. Just adding this to the `ggplot` function creates the labels and ranges for the `x` and `y` axes. They usually have sensible default values, given your data, but we’ll learn how to change them later.
```
mapping <- aes(x = pet,
y = score,
colour = country,
fill = country)
ggplot(data = pets, mapping = mapping)
```
Figure 3\.6: Empty ggplot with x and y labels
People usually omit the argument names and just put the `aes()` function directly as the second argument to `ggplot`. They also usually omit `x` and `y` as argument names to `aes()` (but you have to name the other properties). Next we can add “geoms,” or plot styles. You literally add them with the `+` symbol. You can also add other plot attributes, such as labels, or change the theme and base font size.
```
ggplot(pets, aes(pet, score, colour = country, fill = country)) +
geom_violin(alpha = 0.5) +
labs(x = "Pet type",
y = "Score on an Important Test",
colour = "Country of Origin",
fill = "Country of Origin",
title = "My first plot!") +
theme_bw(base_size = 15)
```
Figure 3\.7: Violin plot with country represented by colour.
3\.7 Common Plot Types
----------------------
There are many geoms, and they can take different arguments to customise their appearance. We’ll learn about some of the most common below.
### 3\.7\.1 Bar plot
Bar plots are good for categorical data where you want to represent the count.
```
ggplot(pets, aes(pet)) +
geom_bar()
```
Figure 3\.8: Bar plot
### 3\.7\.2 Density plot
Density plots are good for one continuous variable, but only if you have a fairly large number of observations.
```
ggplot(pets, aes(score)) +
geom_density()
```
Figure 3\.9: Density plot
You can represent subsets of a variable by assigning the category variable to the argument `group`, `fill`, or `color`.
```
ggplot(pets, aes(score, fill = pet)) +
geom_density(alpha = 0.5)
```
Figure 3\.10: Grouped density plot
Try changing the `alpha` argument to figure out what it does.
### 3\.7\.3 Frequency polygons
If you want the y\-axis to represent count rather than density, try `geom_freqpoly()`.
```
ggplot(pets, aes(score, color = pet)) +
geom_freqpoly(binwidth = 5)
```
Figure 3\.11: Frequency polygon plot
Try changing the `binwidth` argument to 10 and 1\. How do you figure out the right value?
### 3\.7\.4 Histogram
Histograms are also good for one continuous variable, and work well if you don’t have many observations. Set the `binwidth` to control how wide each bar is.
```
ggplot(pets, aes(score)) +
geom_histogram(binwidth = 5, fill = "white", color = "black")
```
Figure 3\.12: Histogram
Histograms in ggplot look pretty bad unless you set the `fill` and `color`.
If you show grouped histograms, you also probably want to change the default `position` argument.
```
ggplot(pets, aes(score, fill=pet)) +
geom_histogram(binwidth = 5, alpha = 0.5,
position = "dodge")
```
Figure 3\.13: Grouped Histogram
Try changing the `position` argument to “identity,” “fill,” “dodge,” or “stack.”
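For example, `position = "identity"` draws the groups in place so they overlap, which is why lowering `alpha` helps (a quick sketch using the same data):
```
ggplot(pets, aes(score, fill = pet)) +
  geom_histogram(binwidth = 5, alpha = 0.5,
                 position = "identity")
```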
### 3\.7\.5 Column plot
Column plots are the worst way to represent grouped continuous data, but also one of the most common. If your data are already aggregated (e.g., you have rows for each group with columns for the mean and standard error), you can use `geom_bar` or `geom_col` and `geom_errorbar` directly. If not, you can use the function `stat_summary` to calculate the mean and standard error and send those numbers to the appropriate geom for plotting.
```
ggplot(pets, aes(pet, score, fill=pet)) +
stat_summary(fun = mean, geom = "col", alpha = 0.5) +
stat_summary(fun.data = mean_se, geom = "errorbar",
width = 0.25) +
coord_cartesian(ylim = c(80, 120))
```
Figure 3\.14: Column plot
Try changing the values for `coord_cartesian`. What does this do?
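As a hint, `coord_cartesian()` zooms the axes without removing any data from the summary calculations; for example, starting the y\-axis at 0 shows the full height of each column (a sketch):
```
ggplot(pets, aes(pet, score, fill=pet)) +
  stat_summary(fun = mean, geom = "col", alpha = 0.5) +
  stat_summary(fun.data = mean_se, geom = "errorbar",
               width = 0.25) +
  coord_cartesian(ylim = c(0, 120))
```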
### 3\.7\.6 Boxplot
Boxplots are great for representing the distribution of grouped continuous variables. They fix most of the problems with using bar/column plots for continuous data.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
```
Figure 3\.15: Box plot
### 3\.7\.7 Violin plot
Violin plots are like sideways, mirrored density plots. They give even more information than a boxplot about distribution and are especially useful when you have non\-normal distributions.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(draw_quantiles = .5,
trim = FALSE, alpha = 0.5)
```
Figure 3\.16: Violin plot
Try changing the `draw_quantiles` argument. Set it to a vector of the numbers 0\.1 to 0\.9 in steps of 0\.1\.
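One way to build that vector is with `seq()`; a sketch of what that might look like:
```
ggplot(pets, aes(pet, score, fill=pet)) +
  geom_violin(draw_quantiles = seq(0.1, 0.9, by = 0.1),
              trim = FALSE, alpha = 0.5)
```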
### 3\.7\.8 Vertical intervals
Boxplots and violin plots don’t always map well onto inferential stats that use the mean. You can represent the mean and standard error or any other value you can calculate.
Here, we will create a table with the means and standard errors for two groups. We’ll learn how to calculate this from raw data in the chapter on [data wrangling](dplyr.html#dplyr). We also create a new object called `gg` that sets up the base of the plot.
```
dat <- tibble(
group = c("A", "B"),
mean = c(10, 20),
se = c(2, 3)
)
gg <- ggplot(dat, aes(group, mean,
ymin = mean-se,
ymax = mean+se))
```
The trick above can be useful if you want to represent the same data in different ways. You can add different geoms to the base plot without having to re\-type the base plot code.
```
gg + geom_crossbar()
```
Figure 3\.17: geom\_crossbar()
```
gg + geom_errorbar()
```
Figure 3\.18: geom\_errorbar()
```
gg + geom_linerange()
```
Figure 3\.19: geom\_linerange()
```
gg + geom_pointrange()
```
Figure 3\.20: geom\_pointrange()
You can also use the function `stat_summary` to calculate mean, standard error, or any other value for your data and display it using any geom.
```
ggplot(pets, aes(pet, score, color=pet)) +
stat_summary(fun.data = mean_se, geom = "crossbar") +
stat_summary(fun.min = function(x) mean(x) - sd(x),
fun.max = function(x) mean(x) + sd(x),
geom = "errorbar", width = 0) +
theme(legend.position = "none") # gets rid of the legend
```
Figure 3\.21: Vertical intervals with stat\_summary()
### 3\.7\.9 Scatter plot
Scatter plots are a good way to represent the relationship between two continuous variables.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_point()
```
Figure 3\.22: Scatter plot using geom\_point()
### 3\.7\.10 Line graph
You often want to represent the relationship as a single line.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.23: Line plot using geom\_smooth()
What are some other options for the `method` argument to `geom_smooth`? When might you want to use them?
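For example, `method = "loess"` fits a locally weighted smooth curve rather than a straight line, which can reveal non\-linear trends (a sketch using the same mapping as above):
```
ggplot(pets, aes(age, score, color = pet)) +
  geom_smooth(formula = y ~ x, method = "loess")
```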
You can plot functions other than the linear `y ~ x`. The code below creates a data table where `x` is 101 values between \-10 and 10, and `y` is `x` squared plus `3*x` plus `1`. You’ll probably recognise this from algebra as a quadratic function. You can set the `formula` argument in `geom_smooth` to a quadratic formula (`y ~ x + I(x^2)`) to fit it to the data.
```
quad <- tibble(
x = seq(-10, 10, length.out = 101),
y = x^2 + 3*x + 1
)
ggplot(quad, aes(x, y)) +
geom_point() +
geom_smooth(formula = y ~ x + I(x^2),
method="lm")
```
Figure 3\.24: Fitting quadratic functions
3\.8 Customisation
------------------
### 3\.8\.1 Labels
You can set custom titles and axis labels in a few different ways.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
labs(title = "Pet score with Age",
x = "Age (in Years)",
y = "score Score",
color = "Pet Type")
```
Figure 3\.25: Set custom labels with labs()
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
ggtitle("Pet score with Age") +
xlab("Age (in Years)") +
ylab("score Score") +
scale_color_discrete(name = "Pet Type")
```
Figure 3\.26: Set custom labels with individual functions
### 3\.8\.2 Colours
You can set custom values for colour and fill using functions like `scale_colour_manual()` and `scale_fill_manual()`. The [Colours chapter in Cookbook for R](http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/) has many more ways to customise colour.
```
ggplot(pets, aes(pet, score, colour = pet, fill = pet)) +
geom_violin() +
scale_color_manual(values = c("darkgreen", "dodgerblue", "orange")) +
scale_fill_manual(values = c("#CCFFCC", "#BBDDFF", "#FFCC66"))
```
Figure 3\.27: Set custom colour
### 3\.8\.3 Themes
GGplot comes with several additional themes and the ability to fully customise your theme. Type `?theme` into the console to see the full list. Other packages such as `cowplot` also have custom themes. You can add a custom theme to the end of your ggplot object and specify a new `base_size` to make the default fonts and lines larger or smaller.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
theme_minimal(base_size = 18)
```
Figure 3\.28: Minimal theme with 18\-point base font size
It’s more complicated, but you can fully customise your theme with `theme()`. You can save this to an object and add it to the end of all of your plots to make the style consistent. Alternatively, you can set the theme at the top of a script with `theme_set()` and this will apply to all subsequent ggplot plots.
```
vampire_theme <- theme(
rect = element_rect(fill = "black"),
panel.background = element_rect(fill = "black"),
text = element_text(size = 20, colour = "white"),
axis.text = element_text(size = 16, colour = "grey70"),
line = element_line(colour = "white", size = 2),
panel.grid = element_blank(),
axis.line = element_line(colour = "white"),
axis.ticks = element_blank(),
legend.position = "top"
)
theme_set(vampire_theme)
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.29: Custom theme
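You can also add the saved theme object to a single plot, and switch back to the default theme when you are done; a sketch (`theme_grey()` is the ggplot2 default):
```
# apply the saved theme to just this plot
ggplot(pets, aes(pet, score, fill = pet)) +
  geom_boxplot(alpha = 0.5) +
  vampire_theme

# restore the default theme for later plots
theme_set(theme_grey())
```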
### 3\.8\.4 Save as file
You can save a ggplot using `ggsave()`. It saves the last ggplot you made, by default, but you can specify which plot you want to save if you assigned that plot to a variable.
You can set the `width` and `height` of your plot. The default units are inches, but you can change the `units` argument to “in,” “cm,” or “mm.”
```
box <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
violin <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(alpha = 0.5)
ggsave("demog_violin_plot.png", width = 5, height = 7)
ggsave("demog_box_plot.jpg", plot = box, width = 5, height = 7)
```
The file type is set from the filename suffix, or by specifying the argument `device`, which can take the following values: “eps,” “ps,” “tex,” “pdf,” “jpeg,” “tiff,” “png,” “bmp,” “svg” or “wmf.”
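For example, a hypothetical call saving the violin plot as a 12 × 8 cm PDF could look like this (the filename is just an example):
```
ggsave("demog_violin_plot.pdf", plot = violin,
       width = 12, height = 8, units = "cm", device = "pdf")
```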
3\.9 Combination Plots
----------------------
### 3\.9\.1 Violinbox plot
A combination of a violin plot to show the shape of the distribution and a boxplot to show the median and interquartile ranges can be a very useful visualisation.
```
ggplot(pets, aes(pet, score, fill = pet)) +
geom_violin(show.legend = FALSE) +
geom_boxplot(width = 0.2, fill = "white",
show.legend = FALSE)
```
Figure 3\.30: Violin\-box plot
Set the `show.legend` argument to `FALSE` to hide the legend. We do this here because the x\-axis already labels the pet types.
### 3\.9\.2 Violin\-point\-range plot
You can use `stat_summary()` to superimpose a point\-range plot showing the mean ± 1 SD. You’ll learn how to write your own functions in the lesson on [Iteration and Functions](func.html#func).
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(trim = FALSE, alpha = 0.5) +
stat_summary(
fun = mean,
fun.max = function(x) {mean(x) + sd(x)},
fun.min = function(x) {mean(x) - sd(x)},
geom="pointrange"
)
```
Figure 3\.31: Point\-range plot using stat\_summary()
### 3\.9\.3 Violin\-jitter plot
If you don’t have a lot of data points, it’s good to represent them individually. You can use `geom_jitter` to do this.
```
# sample_n chooses 50 random observations from the dataset
ggplot(sample_n(pets, 50), aes(pet, score, fill=pet)) +
geom_violin(
trim = FALSE,
draw_quantiles = c(0.25, 0.5, 0.75),
alpha = 0.5
) +
geom_jitter(
width = 0.15, # points spread out over 15% of available width
height = 0, # do not move position on the y-axis
alpha = 0.5,
size = 3
)
```
Figure 3\.32: Violin\-jitter plot
### 3\.9\.4 Scatter\-line graph
If your graph isn’t too complicated, it’s good to also show the individual data points behind the line.
```
ggplot(sample_n(pets, 50), aes(age, weight, colour = pet)) +
geom_point() +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.33: Scatter\-line plot
### 3\.9\.5 Grid of plots
You can use the [`cowplot`](https://cran.r-project.org/web/packages/cowplot/vignettes/introduction.html) package to easily make grids of different graphs. First, you have to assign each plot a name. Then you list all the plots as the first arguments of `plot_grid()` and provide a vector of labels.
```
gg <- ggplot(pets, aes(pet, score, colour = pet))
nolegend <- theme(legend.position = "none")
vp <- gg + geom_violin(alpha = 0.5) + nolegend +
ggtitle("Violin Plot")
bp <- gg + geom_boxplot(alpha = 0.5) + nolegend +
ggtitle("Box Plot")
cp <- gg + stat_summary(fun = mean, geom = "col", fill = "white") + nolegend +
ggtitle("Column Plot")
dp <- ggplot(pets, aes(score, colour = pet)) +
geom_density() + nolegend +
ggtitle("Density Plot")
plot_grid(vp, bp, cp, dp, labels = LETTERS[1:4])
```
Figure 3\.34: Grid of plots
3\.10 Overlapping Discrete Data
-------------------------------
### 3\.10\.1 Reducing Opacity
You can deal with overlapping data points (very common if you’re using Likert scales) by reducing the opacity of the points. You need to use trial and error to adjust these so they look right.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_point(alpha = 0.25) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.35: Deal with overlapping data using transparency
### 3\.10\.2 Proportional Dot Plots
Or you can set the size of the dot proportional to the number of overlapping observations using `geom_count()`.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_count()
```
Figure 3\.36: Deal with overlapping data using geom\_count()
Alternatively, you can transform your data (we will learn to do this in the [data wrangling](dplyr.html#dplyr) chapter) to create a count column and use the count to set the dot colour.
```
pets %>%
group_by(age, score) %>%
summarise(count = n(), .groups = "drop") %>%
ggplot(aes(age, score, color=count)) +
geom_point(size = 2) +
scale_color_viridis_c()
```
Figure 3\.37: Deal with overlapping data using dot colour
The [viridis package](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html) changes the colour themes to be easier to read by people with colourblindness and to print better in greyscale. Viridis has been built into `ggplot2` since v3\.0\.0\. It uses `scale_colour_viridis_c()` and `scale_fill_viridis_c()` for continuous variables and `scale_colour_viridis_d()` and `scale_fill_viridis_d()` for discrete variables.
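For example, the discrete versions work with a categorical variable like `pet` (a minimal sketch):
```
ggplot(pets, aes(pet, score, fill = pet)) +
  geom_boxplot(alpha = 0.5) +
  scale_fill_viridis_d()
```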
3\.11 Overlapping Continuous Data
---------------------------------
Even if the variables are continuous, overplotting might obscure any relationships if you have lots of data.
```
ggplot(pets, aes(age, score)) +
geom_point()
```
Figure 3\.38: Overplotted data
### 3\.11\.1 2D Density Plot
Use `geom_density2d()` to create a contour map.
```
ggplot(pets, aes(age, score)) +
geom_density2d()
```
Figure 3\.39: Contour map with geom\_density2d()
You can use `stat_density_2d(aes(fill = ..level..), geom = "polygon")` to create a heatmap\-style density plot.
```
ggplot(pets, aes(age, score)) +
stat_density_2d(aes(fill = ..level..), geom = "polygon") +
scale_fill_viridis_c()
```
Figure 3\.40: Heatmap\-density plot
### 3\.11\.2 2D Histogram
Use `geom_bin2d()` to create a rectangular heatmap of bin counts. Set the `binwidth` to the x and y dimensions to capture in each box.
```
ggplot(pets, aes(age, score)) +
geom_bin2d(binwidth = c(1, 5))
```
Figure 3\.41: Heatmap of bin counts
### 3\.11\.3 Hexagonal Heatmap
Use `geom_hex()` to create a hexagonal heatmap of bin counts. Adjust the `binwidth`, `xlim()`, `ylim()` and/or the figure dimensions to make the hexagons more or less stretched.
```
ggplot(pets, aes(age, score)) +
geom_hex(binwidth = c(1, 5))
```
Figure 3\.42: Hexagonal heatmap of bin counts
### 3\.11\.4 Correlation Heatmap
I’ve included the code for creating a correlation matrix from a table of variables, but you don’t need to understand how this is done yet. We’ll cover `mutate()` and `gather()` functions in the [dplyr](dplyr.html#dplyr) and [tidyr](tidyr.html#tidyr) lessons.
```
heatmap <- pets %>%
select_if(is.numeric) %>% # get just the numeric columns
cor() %>% # create the correlation matrix
as_tibble(rownames = "V1") %>% # make it a tibble
gather("V2", "r", 2:ncol(.)) # wide to long (V2)
```
Once you have a correlation matrix in the correct (long) format, it’s easy to make a heatmap using `geom_tile()`.
```
ggplot(heatmap, aes(V1, V2, fill=r)) +
geom_tile() +
scale_fill_viridis_c()
```
Figure 3\.43: Heatmap using geom\_tile()
3\.12 Interactive Plots
-----------------------
You can use the `plotly` package to make interactive graphs. Just assign your ggplot to a variable and use the function `ggplotly()`.
```
demog_plot <- ggplot(pets, aes(age, score, fill=pet)) +
geom_point() +
geom_smooth(formula = y~x, method = lm)
ggplotly(demog_plot)
```
Figure 3\.44: Interactive graph using plotly
Hover over the data points above and click on the legend items.
3\.13 Glossary
--------------
| term | definition |
| --- | --- |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [geom](https://psyteachr.github.io/glossary/g#geom) | The geometric style in which data are displayed, such as boxplot, density, or histogram. |
| [likert](https://psyteachr.github.io/glossary/l#likert) | A rating scale with a small number of discrete points in order |
| [nominal](https://psyteachr.github.io/glossary/n#nominal) | Categorical variables that don’t have an inherent order, such as types of animal. |
| [ordinal](https://psyteachr.github.io/glossary/o#ordinal) | Discrete variables that have an inherent order, such as number of legs |
3\.14 Exercises
---------------
Download the [exercises](exercises/03_ggplot_exercise.Rmd). Check the [plots](exercises/03_ggplot_answers.html) to see what your plots should look like (this page doesn’t contain the answer code). Look at the [answers](exercises/03_ggplot_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(3)
# run this to access the answers
dataskills::exercise(3, answers = TRUE)
```
* 1 categorical
* 1 continuous
* 2 categorical
* 2 continuous
* 1 categorical, 1 continuous
* 3 continuous
3\.5 Basic Plots
----------------
R has some basic plotting functions, but they’re difficult to use and aesthetically not very nice. They can be useful to have a quick look at data while you’re working on a script, though. The function `plot()` usually defaults to a sensible type of plot, depending on whether the arguments `x` and `y` are categorical, continuous, or missing.
```
plot(x = pets$pet)
```
Figure 3\.1: plot() with categorical x
```
plot(x = pets$pet, y = pets$score)
```
Figure 3\.2: plot() with categorical x and continuous y
```
plot(x = pets$age, y = pets$weight)
```
Figure 3\.3: plot() with continuous x and y
The function `hist()` creates a quick histogram so you can see the distribution of your data. You can adjust how many columns are plotted with the argument `breaks`.
```
hist(pets$score, breaks = 20)
```
Figure 3\.4: hist()
3\.6 GGplots
------------
While the functions above are nice for quick visualisations, it’s hard to make pretty, publication\-ready plots. The package `ggplot2` (loaded with `tidyverse`) is one of the most common packages for creating beautiful visualisations.
`ggplot2` creates plots using a “grammar of graphics” where you add [geoms](https://psyteachr.github.io/glossary/g#geom "The geometric style in which data are displayed, such as boxplot, density, or histogram.") in layers. It can be complex to understand, but it’s very powerful once you have a mental model of how it works.
Let’s start with a totally empty plot layer created by the `ggplot()` function with no arguments.
```
ggplot()
```
Figure 3\.5: A plot base created by ggplot()
The first argument to `ggplot()` is the `data` table you want to plot. Let’s use the `pets` data we loaded above. The second argument is the `mapping` for which columns in your data table correspond to which properties of the plot, such as the `x`\-axis, the `y`\-axis, line `colour` or `linetype`, point `shape`, or object `fill`. These mappings are specified by the `aes()` function. Just adding this to the `ggplot` function creates the labels and ranges for the `x` and `y` axes. They usually have sensible default values, given your data, but we’ll learn how to change them later.
```
mapping <- aes(x = pet,
y = score,
colour = country,
fill = country)
ggplot(data = pets, mapping = mapping)
```
Figure 3\.6: Empty ggplot with x and y labels
People usually omit the argument names and just put the `aes()` function directly as the second argument to `ggplot`. They also usually omit `x` and `y` as argument names to `aes()` (but you have to name the other properties). Next we can add “geoms,” or plot styles. You literally add them with the `+` symbol. You can also add other plot attributes, such as labels, or change the theme and base font size.
```
ggplot(pets, aes(pet, score, colour = country, fill = country)) +
geom_violin(alpha = 0.5) +
labs(x = "Pet type",
y = "Score on an Important Test",
colour = "Country of Origin",
fill = "Country of Origin",
title = "My first plot!") +
theme_bw(base_size = 15)
```
Figure 3\.7: Violin plot with country represented by colour.
3\.7 Common Plot Types
----------------------
There are many geoms, and they can take different arguments to customise their appearance. We’ll learn about some of the most common below.
### 3\.7\.1 Bar plot
Bar plots are good for categorical data where you want to represent the count.
```
ggplot(pets, aes(pet)) +
geom_bar()
```
Figure 3\.8: Bar plot
### 3\.7\.2 Density plot
Density plots are good for one continuous variable, but only if you have a fairly large number of observations.
```
ggplot(pets, aes(score)) +
geom_density()
```
Figure 3\.9: Density plot
You can represent subsets of a variable by assigning the category variable to the argument `group`, `fill`, or `color`.
```
ggplot(pets, aes(score, fill = pet)) +
geom_density(alpha = 0.5)
```
Figure 3\.10: Grouped density plot
Try changing the `alpha` argument to figure out what it does.
### 3\.7\.3 Frequency polygons
If you want the y\-axis to represent count rather than density, try `geom_freqpoly()`.
```
ggplot(pets, aes(score, color = pet)) +
geom_freqpoly(binwidth = 5)
```
Figure 3\.11: Frequency ploygon plot
Try changing the `binwidth` argument to 10 and 1\. How do you figure out the right value?
### 3\.7\.4 Histogram
Histograms are also good for one continuous variable, and work well if you don’t have many observations. Set the `binwidth` to control how wide each bar is.
```
ggplot(pets, aes(score)) +
geom_histogram(binwidth = 5, fill = "white", color = "black")
```
Figure 3\.12: Histogram
Histograms in ggplot look pretty bad unless you set the `fill` and `color`.
If you show grouped histograms, you also probably want to change the default `position` argument.
```
ggplot(pets, aes(score, fill=pet)) +
geom_histogram(binwidth = 5, alpha = 0.5,
position = "dodge")
```
Figure 3\.13: Grouped Histogram
Try changing the `position` argument to “identity,” “fill,” “dodge,” or “stack.”
### 3\.7\.5 Column plot
Column plots are the worst way to represent grouped continuous data, but also one of the most common. If your data are already aggregated (e.g., you have rows for each group with columns for the mean and standard error), you can use `geom_bar` or `geom_col` and `geom_errorbar` directly. If not, you can use the function `stat_summary` to calculate the mean and standard error and send those numbers to the appropriate geom for plotting.
```
ggplot(pets, aes(pet, score, fill=pet)) +
stat_summary(fun = mean, geom = "col", alpha = 0.5) +
stat_summary(fun.data = mean_se, geom = "errorbar",
width = 0.25) +
coord_cartesian(ylim = c(80, 120))
```
Figure 3\.14: Column plot
Try changing the values for `coord_cartesian`. What does this do?
### 3\.7\.6 Boxplot
Boxplots are great for representing the distribution of grouped continuous variables. They fix most of the problems with using bar/column plots for continuous data.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
```
Figure 3\.15: Box plot
### 3\.7\.7 Violin plot
Violin pots are like sideways, mirrored density plots. They give even more information than a boxplot about distribution and are especially useful when you have non\-normal distributions.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(draw_quantiles = .5,
trim = FALSE, alpha = 0.5,)
```
Figure 3\.16: Violin plot
Try changing the `quantile` argument. Set it to a vector of the numbers 0\.1 to 0\.9 in steps of 0\.1\.
### 3\.7\.8 Vertical intervals
Boxplots and violin plots don’t always map well onto inferential stats that use the mean. You can represent the mean and standard error or any other value you can calculate.
Here, we will create a table with the means and standard errors for two groups. We’ll learn how to calculate this from raw data in the chapter on [data wrangling](dplyr.html#dplyr). We also create a new object called `gg` that sets up the base of the plot.
```
dat <- tibble(
group = c("A", "B"),
mean = c(10, 20),
se = c(2, 3)
)
gg <- ggplot(dat, aes(group, mean,
ymin = mean-se,
ymax = mean+se))
```
The trick above can be useful if you want to represent the same data in different ways. You can add different geoms to the base plot without having to re\-type the base plot code.
```
gg + geom_crossbar()
```
Figure 3\.17: geom\_crossbar()
```
gg + geom_errorbar()
```
Figure 3\.18: geom\_errorbar()
```
gg + geom_linerange()
```
Figure 3\.19: geom\_linerange()
```
gg + geom_pointrange()
```
Figure 3\.20: geom\_pointrange()
You can also use the function `stats_summary` to calculate mean, standard error, or any other value for your data and display it using any geom.
```
ggplot(pets, aes(pet, score, color=pet)) +
stat_summary(fun.data = mean_se, geom = "crossbar") +
stat_summary(fun.min = function(x) mean(x) - sd(x),
fun.max = function(x) mean(x) + sd(x),
geom = "errorbar", width = 0) +
theme(legend.position = "none") # gets rid of the legend
```
Figure 3\.21: Vertical intervals with stats\_summary()
### 3\.7\.9 Scatter plot
Scatter plots are a good way to represent the relationship between two continuous variables.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_point()
```
Figure 3\.22: Scatter plot using geom\_point()
### 3\.7\.10 Line graph
You often want to represent the relationship as a single line.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.23: Line plot using geom\_smooth()
What are some other options for the `method` argument to `geom_smooth`? When might you want to use them?
You can plot functions other than the linear `y ~ x`. The code below creates a data table where `x` is 101 values between \-10 and 10\. and `y` is `x` squared plus `3*x` plus `1`. You’ll probably recognise this from algebra as the quadratic equation. You can set the `formula` argument in `geom_smooth` to a quadratic formula (`y ~ x + I(x^2)`) to fit a quadratic function to the data.
```
quad <- tibble(
x = seq(-10, 10, length.out = 101),
y = x^2 + 3*x + 1
)
ggplot(quad, aes(x, y)) +
geom_point() +
geom_smooth(formula = y ~ x + I(x^2),
method="lm")
```
Figure 3\.24: Fitting quadratic functions
### 3\.7\.1 Bar plot
Bar plots are good for categorical data where you want to represent the count.
```
ggplot(pets, aes(pet)) +
geom_bar()
```
Figure 3\.8: Bar plot
### 3\.7\.2 Density plot
Density plots are good for one continuous variable, but only if you have a fairly large number of observations.
```
ggplot(pets, aes(score)) +
geom_density()
```
Figure 3\.9: Density plot
You can represent subsets of a variable by assigning the category variable to the argument `group`, `fill`, or `color`.
```
ggplot(pets, aes(score, fill = pet)) +
geom_density(alpha = 0.5)
```
Figure 3\.10: Grouped density plot
Try changing the `alpha` argument to figure out what it does.
### 3\.7\.3 Frequency polygons
If you want the y\-axis to represent count rather than density, try `geom_freqpoly()`.
```
ggplot(pets, aes(score, color = pet)) +
geom_freqpoly(binwidth = 5)
```
Figure 3\.11: Frequency ploygon plot
Try changing the `binwidth` argument to 10 and 1\. How do you figure out the right value?
### 3\.7\.4 Histogram
Histograms are also good for one continuous variable, and work well if you don’t have many observations. Set the `binwidth` to control how wide each bar is.
```
ggplot(pets, aes(score)) +
geom_histogram(binwidth = 5, fill = "white", color = "black")
```
Figure 3\.12: Histogram
Histograms in ggplot look pretty bad unless you set the `fill` and `color`.
If you show grouped histograms, you also probably want to change the default `position` argument.
```
ggplot(pets, aes(score, fill=pet)) +
geom_histogram(binwidth = 5, alpha = 0.5,
position = "dodge")
```
Figure 3\.13: Grouped Histogram
Try changing the `position` argument to “identity,” “fill,” “dodge,” or “stack.”
### 3\.7\.5 Column plot
Column plots are the worst way to represent grouped continuous data, but also one of the most common. If your data are already aggregated (e.g., you have rows for each group with columns for the mean and standard error), you can use `geom_bar` or `geom_col` and `geom_errorbar` directly. If not, you can use the function `stat_summary` to calculate the mean and standard error and send those numbers to the appropriate geom for plotting.
```
ggplot(pets, aes(pet, score, fill=pet)) +
stat_summary(fun = mean, geom = "col", alpha = 0.5) +
stat_summary(fun.data = mean_se, geom = "errorbar",
width = 0.25) +
coord_cartesian(ylim = c(80, 120))
```
Figure 3\.14: Column plot
Try changing the values for `coord_cartesian`. What does this do?
### 3\.7\.6 Boxplot
Boxplots are great for representing the distribution of grouped continuous variables. They fix most of the problems with using bar/column plots for continuous data.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
```
Figure 3\.15: Box plot
### 3\.7\.7 Violin plot
Violin pots are like sideways, mirrored density plots. They give even more information than a boxplot about distribution and are especially useful when you have non\-normal distributions.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(draw_quantiles = .5,
trim = FALSE, alpha = 0.5,)
```
Figure 3\.16: Violin plot
Try changing the `quantile` argument. Set it to a vector of the numbers 0\.1 to 0\.9 in steps of 0\.1\.
### 3\.7\.8 Vertical intervals
Boxplots and violin plots don’t always map well onto inferential stats that use the mean. You can represent the mean and standard error or any other value you can calculate.
Here, we will create a table with the means and standard errors for two groups. We’ll learn how to calculate this from raw data in the chapter on [data wrangling](dplyr.html#dplyr). We also create a new object called `gg` that sets up the base of the plot.
```
dat <- tibble(
group = c("A", "B"),
mean = c(10, 20),
se = c(2, 3)
)
gg <- ggplot(dat, aes(group, mean,
ymin = mean-se,
ymax = mean+se))
```
The trick above can be useful if you want to represent the same data in different ways. You can add different geoms to the base plot without having to re\-type the base plot code.
```
gg + geom_crossbar()
```
Figure 3\.17: geom\_crossbar()
```
gg + geom_errorbar()
```
Figure 3\.18: geom\_errorbar()
```
gg + geom_linerange()
```
Figure 3\.19: geom\_linerange()
```
gg + geom_pointrange()
```
Figure 3\.20: geom\_pointrange()
You can also use the function `stats_summary` to calculate mean, standard error, or any other value for your data and display it using any geom.
```
ggplot(pets, aes(pet, score, color=pet)) +
stat_summary(fun.data = mean_se, geom = "crossbar") +
stat_summary(fun.min = function(x) mean(x) - sd(x),
fun.max = function(x) mean(x) + sd(x),
geom = "errorbar", width = 0) +
theme(legend.position = "none") # gets rid of the legend
```
Figure 3\.21: Vertical intervals with stats\_summary()
### 3\.7\.9 Scatter plot
Scatter plots are a good way to represent the relationship between two continuous variables.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_point()
```
Figure 3\.22: Scatter plot using geom\_point()
### 3\.7\.10 Line graph
You often want to represent the relationship as a single line.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.23: Line plot using geom\_smooth()
What are some other options for the `method` argument to `geom_smooth`? When might you want to use them?
You can plot functions other than the linear `y ~ x`. The code below creates a data table where `x` is 101 values between \-10 and 10\. and `y` is `x` squared plus `3*x` plus `1`. You’ll probably recognise this from algebra as the quadratic equation. You can set the `formula` argument in `geom_smooth` to a quadratic formula (`y ~ x + I(x^2)`) to fit a quadratic function to the data.
```
quad <- tibble(
x = seq(-10, 10, length.out = 101),
y = x^2 + 3*x + 1
)
ggplot(quad, aes(x, y)) +
geom_point() +
geom_smooth(formula = y ~ x + I(x^2),
method="lm")
```
Figure 3\.24: Fitting quadratic functions
3\.8 Customisation
------------------
### 3\.8\.1 Labels
You can set custom titles and axis labels in a few different ways.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
labs(title = "Pet score with Age",
x = "Age (in Years)",
y = "score Score",
color = "Pet Type")
```
Figure 3\.25: Set custom labels with labs()
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
ggtitle("Pet score with Age") +
xlab("Age (in Years)") +
ylab("score Score") +
scale_color_discrete(name = "Pet Type")
```
Figure 3\.26: Set custom labels with individual functions
### 3\.8\.2 Colours
You can set custom values for colour and fill using functions like `scale_colour_manual()` and `scale_fill_manual()`. The [Colours chapter in Cookbook for R](http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/) has many more ways to customise colour.
```
ggplot(pets, aes(pet, score, colour = pet, fill = pet)) +
geom_violin() +
scale_color_manual(values = c("darkgreen", "dodgerblue", "orange")) +
scale_fill_manual(values = c("#CCFFCC", "#BBDDFF", "#FFCC66"))
```
Figure 3\.27: Set custom colour
### 3\.8\.3 Themes
GGplot comes with several additional themes and the ability to fully customise your theme. Type `?theme` into the console to see the full list. Other packages such as `cowplot` also have custom themes. You can add a custom theme to the end of your ggplot object and specify a new `base_size` to make the default fonts and lines larger or smaller.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
theme_minimal(base_size = 18)
```
Figure 3\.28: Minimal theme with 18\-point base font size
It’s more complicated, but you can fully customise your theme with `theme()`. You can save this to an object and add it to the end of all of your plots to make the style consistent. Alternatively, you can set the theme at the top of a script with `theme_set()` and this will apply to all subsequent ggplot plots.
```
vampire_theme <- theme(
rect = element_rect(fill = "black"),
panel.background = element_rect(fill = "black"),
text = element_text(size = 20, colour = "white"),
axis.text = element_text(size = 16, colour = "grey70"),
line = element_line(colour = "white", size = 2),
panel.grid = element_blank(),
axis.line = element_line(colour = "white"),
axis.ticks = element_blank(),
legend.position = "top"
)
theme_set(vampire_theme)
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.29: Custom theme
### 3\.8\.4 Save as file
You can save a ggplot using `ggsave()`. It saves the last ggplot you made, by default, but you can specify which plot you want to save if you assigned that plot to a variable.
You can set the `width` and `height` of your plot. The default units are inches, but you can change the `units` argument to “in,” “cm,” or “mm.”
```
box <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
violin <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(alpha = 0.5)
ggsave("demog_violin_plot.png", width = 5, height = 7)
ggsave("demog_box_plot.jpg", plot = box, width = 5, height = 7)
```
The file type is set from the filename suffix, or by specifying the argument `device`, which can take the following values: “eps,” “ps,” “tex,” “pdf,” “jpeg,” “tiff,” “png,” “bmp,” “svg” or “wmf.”
### 3\.8\.1 Labels
You can set custom titles and axis labels in a few different ways.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
labs(title = "Pet score with Age",
x = "Age (in Years)",
y = "score Score",
color = "Pet Type")
```
Figure 3\.25: Set custom labels with labs()
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
ggtitle("Pet score with Age") +
xlab("Age (in Years)") +
ylab("score Score") +
scale_color_discrete(name = "Pet Type")
```
Figure 3\.26: Set custom labels with individual functions
### 3\.8\.2 Colours
You can set custom values for colour and fill using functions like `scale_colour_manual()` and `scale_fill_manual()`. The [Colours chapter in Cookbook for R](http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/) has many more ways to customise colour.
```
ggplot(pets, aes(pet, score, colour = pet, fill = pet)) +
geom_violin() +
scale_color_manual(values = c("darkgreen", "dodgerblue", "orange")) +
scale_fill_manual(values = c("#CCFFCC", "#BBDDFF", "#FFCC66"))
```
Figure 3\.27: Set custom colour
### 3\.8\.3 Themes
GGplot comes with several additional themes and the ability to fully customise your theme. Type `?theme` into the console to see the full list. Other packages such as `cowplot` also have custom themes. You can add a custom theme to the end of your ggplot object and specify a new `base_size` to make the default fonts and lines larger or smaller.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
theme_minimal(base_size = 18)
```
Figure 3\.28: Minimal theme with 18\-point base font size
It’s more complicated, but you can fully customise your theme with `theme()`. You can save this to an object and add it to the end of all of your plots to make the style consistent. Alternatively, you can set the theme at the top of a script with `theme_set()` and this will apply to all subsequent ggplot plots.
```
vampire_theme <- theme(
rect = element_rect(fill = "black"),
panel.background = element_rect(fill = "black"),
text = element_text(size = 20, colour = "white"),
axis.text = element_text(size = 16, colour = "grey70"),
line = element_line(colour = "white", size = 2),
panel.grid = element_blank(),
axis.line = element_line(colour = "white"),
axis.ticks = element_blank(),
legend.position = "top"
)
theme_set(vampire_theme)
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.29: Custom theme
### 3\.8\.4 Save as file
You can save a ggplot using `ggsave()`. It saves the last ggplot you made, by default, but you can specify which plot you want to save if you assigned that plot to a variable.
You can set the `width` and `height` of your plot. The default units are inches, but you can change the `units` argument to “in,” “cm,” or “mm.”
```
box <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
violin <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(alpha = 0.5)
ggsave("demog_violin_plot.png", width = 5, height = 7)
ggsave("demog_box_plot.jpg", plot = box, width = 5, height = 7)
```
The file type is set from the filename suffix, or by specifying the argument `device`, which can take the following values: “eps,” “ps,” “tex,” “pdf,” “jpeg,” “tiff,” “png,” “bmp,” “svg” or “wmf.”
3\.9 Combination Plots
----------------------
### 3\.9\.1 Violinbox plot
A combination of a violin plot to show the shape of the distribution and a boxplot to show the median and interquartile ranges can be a very useful visualisation.
```
ggplot(pets, aes(pet, score, fill = pet)) +
geom_violin(show.legend = FALSE) +
geom_boxplot(width = 0.2, fill = "white",
show.legend = FALSE)
```
Figure 3\.30: Violin\-box plot
Set the `show.legend` argument to `FALSE` to hide the legend. We do this here because the x\-axis already labels the pet types.
### 3\.9\.2 Violin\-point\-range plot
You can use `stat_summary()` to superimpose a point\-range plot showning the mean ± 1 SD. You’ll learn how to write your own functions in the lesson on [Iteration and Functions](func.html#func).
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(trim = FALSE, alpha = 0.5) +
stat_summary(
fun = mean,
fun.max = function(x) {mean(x) + sd(x)},
fun.min = function(x) {mean(x) - sd(x)},
geom="pointrange"
)
```
Figure 3\.31: Point\-range plot using stat\_summary()
### 3\.9\.3 Violin\-jitter plot
If you don’t have a lot of data points, it’s good to represent them individually. You can use `geom_jitter` to do this.
```
# sample_n chooses 50 random observations from the dataset
ggplot(sample_n(pets, 50), aes(pet, score, fill=pet)) +
geom_violin(
trim = FALSE,
draw_quantiles = c(0.25, 0.5, 0.75),
alpha = 0.5
) +
geom_jitter(
width = 0.15, # points spread out over 15% of available width
height = 0, # do not move position on the y-axis
alpha = 0.5,
size = 3
)
```
Figure 3\.32: Violin\-jitter plot
### 3\.9\.4 Scatter\-line graph
If your graph isn’t too complicated, it’s good to also show the individual data points behind the line.
```
ggplot(sample_n(pets, 50), aes(age, weight, colour = pet)) +
geom_point() +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.33: Scatter\-line plot
### 3\.9\.5 Grid of plots
You can use the [`cowplot`](https://cran.r-project.org/web/packages/cowplot/vignettes/introduction.html) package to easily make grids of different graphs. First, you have to assign each plot a name. Then you list all the plots as the first arguments of `plot_grid()` and provide a vector of labels.
```
gg <- ggplot(pets, aes(pet, score, colour = pet))
nolegend <- theme(legend.position = 0)
vp <- gg + geom_violin(alpha = 0.5) + nolegend +
ggtitle("Violin Plot")
bp <- gg + geom_boxplot(alpha = 0.5) + nolegend +
ggtitle("Box Plot")
cp <- gg + stat_summary(fun = mean, geom = "col", fill = "white") + nolegend +
ggtitle("Column Plot")
dp <- ggplot(pets, aes(score, colour = pet)) +
geom_density() + nolegend +
ggtitle("Density Plot")
plot_grid(vp, bp, cp, dp, labels = LETTERS[1:4])
```
Figure 3\.34: Grid of plots
### 3\.9\.1 Violinbox plot
A combination of a violin plot to show the shape of the distribution and a boxplot to show the median and interquartile ranges can be a very useful visualisation.
```
ggplot(pets, aes(pet, score, fill = pet)) +
geom_violin(show.legend = FALSE) +
geom_boxplot(width = 0.2, fill = "white",
show.legend = FALSE)
```
Figure 3\.30: Violin\-box plot
Set the `show.legend` argument to `FALSE` to hide the legend. We do this here because the x\-axis already labels the pet types.
### 3\.9\.2 Violin\-point\-range plot
You can use `stat_summary()` to superimpose a point\-range plot showning the mean ± 1 SD. You’ll learn how to write your own functions in the lesson on [Iteration and Functions](func.html#func).
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(trim = FALSE, alpha = 0.5) +
stat_summary(
fun = mean,
fun.max = function(x) {mean(x) + sd(x)},
fun.min = function(x) {mean(x) - sd(x)},
geom="pointrange"
)
```
Figure 3\.31: Point\-range plot using stat\_summary()
### 3\.9\.3 Violin\-jitter plot
If you don’t have a lot of data points, it’s good to represent them individually. You can use `geom_jitter` to do this.
```
# sample_n chooses 50 random observations from the dataset
ggplot(sample_n(pets, 50), aes(pet, score, fill=pet)) +
geom_violin(
trim = FALSE,
draw_quantiles = c(0.25, 0.5, 0.75),
alpha = 0.5
) +
geom_jitter(
width = 0.15, # points spread out over 15% of available width
height = 0, # do not move position on the y-axis
alpha = 0.5,
size = 3
)
```
Figure 3\.32: Violin\-jitter plot
### 3\.9\.4 Scatter\-line graph
If your graph isn’t too complicated, it’s good to also show the individual data points behind the line.
```
ggplot(sample_n(pets, 50), aes(age, weight, colour = pet)) +
geom_point() +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.33: Scatter\-line plot
### 3\.9\.5 Grid of plots
You can use the [`cowplot`](https://cran.r-project.org/web/packages/cowplot/vignettes/introduction.html) package to easily make grids of different graphs. First, you have to assign each plot a name. Then you list all the plots as the first arguments of `plot_grid()` and provide a vector of labels.
```
gg <- ggplot(pets, aes(pet, score, colour = pet))
nolegend <- theme(legend.position = 0)
vp <- gg + geom_violin(alpha = 0.5) + nolegend +
ggtitle("Violin Plot")
bp <- gg + geom_boxplot(alpha = 0.5) + nolegend +
ggtitle("Box Plot")
cp <- gg + stat_summary(fun = mean, geom = "col", fill = "white") + nolegend +
ggtitle("Column Plot")
dp <- ggplot(pets, aes(score, colour = pet)) +
geom_density() + nolegend +
ggtitle("Density Plot")
plot_grid(vp, bp, cp, dp, labels = LETTERS[1:4])
```
Figure 3\.34: Grid of plots
3\.10 Overlapping Discrete Data
-------------------------------
### 3\.10\.1 Reducing Opacity
You can deal with overlapping data points (very common if you’re using Likert scales) by reducing the opacity of the points. You need to use trial and error to adjust these so they look right.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_point(alpha = 0.25) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.35: Deal with overlapping data using transparency
### 3\.10\.2 Proportional Dot Plots
Or you can set the size of the dot proportional to the number of overlapping observations using `geom_count()`.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_count()
```
Figure 3\.36: Deal with overlapping data using geom\_count()
Alternatively, you can transform your data (we will learn to do this in the [data wrangling](dplyr.html#dplyr) chapter) to create a count column and use the count to set the dot colour.
```
pets %>%
group_by(age, score) %>%
summarise(count = n(), .groups = "drop") %>%
ggplot(aes(age, score, color=count)) +
geom_point(size = 2) +
scale_color_viridis_c()
```
Figure 3\.37: Deal with overlapping data using dot colour
The [viridis package](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html) changes the colour themes to be easier to read by people with colourblindness and to print better in greyscale. Viridis is built into `ggplot2` since v3\.0\.0\. It uses `scale_colour_viridis_c()` and `scale_fill_viridis_c()` for continuous variables and `scale_colour_viridis_d()` and `scale_fill_viridis_d()` for discrete variables.
### 3\.10\.1 Reducing Opacity
You can deal with overlapping data points (very common if you’re using Likert scales) by reducing the opacity of the points. You need to use trial and error to adjust these so they look right.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_point(alpha = 0.25) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.35: Deal with overlapping data using transparency
### 3\.10\.2 Proportional Dot Plots
Or you can set the size of the dot proportional to the number of overlapping observations using `geom_count()`.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_count()
```
Figure 3\.36: Deal with overlapping data using geom\_count()
Alternatively, you can transform your data (we will learn to do this in the [data wrangling](dplyr.html#dplyr) chapter) to create a count column and use the count to set the dot colour.
```
pets %>%
group_by(age, score) %>%
summarise(count = n(), .groups = "drop") %>%
ggplot(aes(age, score, color=count)) +
geom_point(size = 2) +
scale_color_viridis_c()
```
Figure 3\.37: Deal with overlapping data using dot colour
The [viridis package](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html) changes the colour themes to be easier to read by people with colourblindness and to print better in greyscale. Viridis is built into `ggplot2` since v3\.0\.0\. It uses `scale_colour_viridis_c()` and `scale_fill_viridis_c()` for continuous variables and `scale_colour_viridis_d()` and `scale_fill_viridis_d()` for discrete variables.
3\.11 Overlapping Continuous Data
---------------------------------
Even if the variables are continuous, overplotting might obscure any relationships if you have lots of data.
```
ggplot(pets, aes(age, score)) +
geom_point()
```
Figure 3\.38: Overplotted data
### 3\.11\.1 2D Density Plot
Use `geom_density2d()` to create a contour map.
```
ggplot(pets, aes(age, score)) +
geom_density2d()
```
Figure 3\.39: Contour map with geom\_density2d()
You can use `stat_density_2d(aes(fill = ..level..), geom = "polygon")` to create a heatmap\-style density plot.
```
ggplot(pets, aes(age, score)) +
stat_density_2d(aes(fill = ..level..), geom = "polygon") +
scale_fill_viridis_c()
```
Figure 3\.40: Heatmap\-density plot
### 3\.11\.2 2D Histogram
Use `geom_bin2d()` to create a rectangular heatmap of bin counts. Set the `binwidth` to the x and y dimensions to capture in each box.
```
ggplot(pets, aes(age, score)) +
geom_bin2d(binwidth = c(1, 5))
```
Figure 3\.41: Heatmap of bin counts
### 3\.11\.3 Hexagonal Heatmap
Use `geomhex()` to create a hexagonal heatmap of bin counts. Adjust the `binwidth`, `xlim()`, `ylim()` and/or the figure dimensions to make the hexagons more or less stretched.
```
ggplot(pets, aes(age, score)) +
geom_hex(binwidth = c(1, 5))
```
Figure 3\.42: Hexagonal heatmap of bin counts
### 3\.11\.4 Correlation Heatmap
I’ve included the code for creating a correlation matrix from a table of variables, but you don’t need to understand how this is done yet. We’ll cover `mutate()` and `gather()` functions in the [dplyr](dplyr.html#dplyr) and [tidyr](tidyr.html#tidyr) lessons.
```
heatmap <- pets %>%
select_if(is.numeric) %>% # get just the numeric columns
cor() %>% # create the correlation matrix
as_tibble(rownames = "V1") %>% # make it a tibble
gather("V2", "r", 2:ncol(.)) # wide to long (V2)
```
Once you have a correlation matrix in the correct (long) format, it’s easy to make a heatmap using `geom_tile()`.
```
ggplot(heatmap, aes(V1, V2, fill=r)) +
geom_tile() +
scale_fill_viridis_c()
```
Figure 3\.43: Heatmap using geom\_tile()
3\.12 Interactive Plots
-----------------------
You can use the `plotly` package to make interactive graphs. Just assign your ggplot to a variable and use the function `ggplotly()`.
```
demog_plot <- ggplot(pets, aes(age, score, fill=pet)) +
geom_point() +
geom_smooth(formula = y~x, method = lm)
ggplotly(demog_plot)
```
Figure 3\.44: Interactive graph using plotly
Hover over the data points above and click on the legend items.
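If you want to keep the interactive version, one approach (a sketch; the filename is just an example) is to save the widget as an HTML file with the `htmlwidgets` package, which is installed as a dependency of `plotly`:
```
# save the interactive plot as a standalone web page
htmlwidgets::saveWidget(ggplotly(demog_plot), "demog_plot.html")
```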
3\.13 Glossary
--------------
| term | definition |
| --- | --- |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [geom](https://psyteachr.github.io/glossary/g#geom) | The geometric style in which data are displayed, such as boxplot, density, or histogram. |
| [likert](https://psyteachr.github.io/glossary/l#likert) | A rating scale with a small number of discrete points in order |
| [nominal](https://psyteachr.github.io/glossary/n#nominal) | Categorical variables that don’t have an inherent order, such as types of animal. |
| [ordinal](https://psyteachr.github.io/glossary/o#ordinal) | Discrete variables that have an inherent order, such as number of legs |
3\.14 Exercises
---------------
Download the [exercises](exercises/03_ggplot_exercise.Rmd). Check the [plots](exercises/03_ggplot_answers.html) to see what your plots should look like (this page doesn't contain the answer code). Look at the [answers](exercises/03_ggplot_answers.Rmd) only after you've attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(3)
# run this to access the answers
dataskills::exercise(3, answers = TRUE)
```
Chapter 3 Data Visualisation
============================
3\.1 Learning Objectives
------------------------
### 3\.1\.1 Basic
1. Understand what types of graphs are best for [different types of data](ggplot.html#vartypes) [(video)](https://youtu.be/tOFQFPRgZ3M)
* 1 discrete
* 1 continuous
* 2 discrete
* 2 continuous
* 1 discrete, 1 continuous
* 3 continuous
2. Create common types of graphs with ggplot2 [(video)](https://youtu.be/kKlQupjD__g)
* [`geom_bar()`](ggplot.html#geom_bar)
* [`geom_density()`](ggplot.html#geom_density)
* [`geom_freqpoly()`](ggplot.html#geom_freqpoly)
* [`geom_histogram()`](ggplot.html#geom_histogram)
* [`geom_col()`](ggplot.html#geom_col)
* [`geom_boxplot()`](ggplot.html#geom_boxplot)
* [`geom_violin()`](ggplot.html#geom_violin)
* [Vertical Intervals](ggplot.html#vertical_intervals)
+ `geom_crossbar()`
+ `geom_errorbar()`
+ `geom_linerange()`
+ `geom_pointrange()`
* [`geom_point()`](ggplot.html#geom_point)
* [`geom_smooth()`](ggplot.html#geom_smooth)
3. Set custom [labels](ggplot.html#custom-labels), [colours](ggplot.html#custom-colours), and [themes](ggplot.html#themes) [(video)](https://youtu.be/6pHuCbOh86s)
4. [Combine plots](combo_plots) on the same plot, as facets, or as a grid using cowplot [(video)](https://youtu.be/AnqlfuU-VZk)
5. [Save plots](ggplot.html#ggsave) as an image file [(video)](https://youtu.be/f1Y53mjEli0)
### 3\.1\.2 Intermediate
6. Add lines to graphs
7. Deal with [overlapping data](ggplot.html#overlap)
8. Create less common types of graphs
* [`geom_tile()`](ggplot.html#geom_tile)
* [`geom_density2d()`](ggplot.html#geom_density2d)
* [`geom_bin2d()`](ggplot.html#geom_bin2d)
* [`geom_hex()`](ggplot.html#geom_hex)
* [`geom_count()`](ggplot.html#geom_count)
9. Adjust axes (e.g., flip coordinates, set axis limits)
10. Create interactive graphs with [`plotly`](ggplot.html#plotly)
3\.2 Resources
--------------
* [Chapter 3: Data Visualisation](http://r4ds.had.co.nz/data-visualisation.html) of *R for Data Science*
* [ggplot2 cheat sheet](https://github.com/rstudio/cheatsheets/raw/master/data-visualization-2.1.pdf)
* [Chapter 28: Graphics for communication](http://r4ds.had.co.nz/graphics-for-communication.html) of *R for Data Science*
* [Look at Data](http://socviz.co/look-at-data.html) from [Data Vizualization for Social Science](http://socviz.co/)
* [Hack Your Data Beautiful](https://psyteachr.github.io/hack-your-data/) workshop by University of Glasgow postgraduate students
* [Graphs](http://www.cookbook-r.com/Graphs) in *Cookbook for R*
* [ggplot2 documentation](https://ggplot2.tidyverse.org/reference/)
* [The R Graph Gallery](http://www.r-graph-gallery.com/) (this is really useful)
* [Top 50 ggplot2 Visualizations](http://r-statistics.co/Top50-Ggplot2-Visualizations-MasterList-R-Code.html)
* [R Graphics Cookbook](http://www.cookbook-r.com/Graphs/) by Winston Chang
* [ggplot extensions](https://www.ggplot2-exts.org/)
* [plotly](https://plot.ly/ggplot2/) for creating interactive graphs
3\.3 Setup
----------
```
# libraries needed for these graphs
library(tidyverse)
library(dataskills)
library(plotly)
library(cowplot)
set.seed(30250) # makes sure random numbers are reproducible
```
3\.4 Common Variable Combinations
---------------------------------
[Continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") variables are properties you can measure, like height. [Discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") variables are things you can count, like the number of pets you have. Categorical variables can be [nominal](https://psyteachr.github.io/glossary/n#nominal "Categorical variables that don’t have an inherent order, such as types of animal."), where the categories don’t really have an order, like cats, dogs and ferrets (even though ferrets are obviously best). They can also be [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs"), where there is a clear order, but the distance between the categories isn’t something you could exactly equate, like points on a [Likert](https://psyteachr.github.io/glossary/l#likert "A rating scale with a small number of discrete points in order") rating scale.
Different types of visualisations are good for different types of variables.
Load the `pets` dataset from the `dataskills` package and explore it with `glimpse(pets)` or `View(pets)`. This is a simulated dataset with one random factor (`id`), two categorical factors (`pet`, `country`) and three continuous variables (`score`, `age`, `weight`).
```
data("pets")
# if you don't have the dataskills package, use:
# pets <- read_csv("https://psyteachr.github.io/msc-data-skills/data/pets.csv", col_types = "cffiid")
glimpse(pets)
```
```
## Rows: 800
## Columns: 6
## $ id <chr> "S001", "S002", "S003", "S004", "S005", "S006", "S007", "S008"…
## $ pet <fct> dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, do…
## $ country <fct> UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK…
## $ score <int> 90, 107, 94, 120, 111, 110, 100, 107, 106, 109, 85, 110, 102, …
## $ age <int> 6, 8, 2, 10, 4, 8, 9, 8, 6, 11, 5, 9, 1, 10, 7, 8, 1, 8, 5, 13…
## $ weight <dbl> 19.78932, 20.01422, 19.14863, 19.56953, 21.39259, 21.31880, 19…
```
Before you read ahead, come up with an example of each type of variable combination and sketch the types of graphs that would best display these data.
* 1 categorical
* 1 continuous
* 2 categorical
* 2 continuous
* 1 categorical, 1 continuous
* 3 continuous
3\.5 Basic Plots
----------------
R has some basic plotting functions, but they’re difficult to use and aesthetically not very nice. They can be useful to have a quick look at data while you’re working on a script, though. The function `plot()` usually defaults to a sensible type of plot, depending on whether the arguments `x` and `y` are categorical, continuous, or missing.
```
plot(x = pets$pet)
```
Figure 3\.1: plot() with categorical x
```
plot(x = pets$pet, y = pets$score)
```
Figure 3\.2: plot() with categorical x and continuous y
```
plot(x = pets$age, y = pets$weight)
```
Figure 3\.3: plot() with continuous x and y
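With two categorical variables, `plot()` defaults to a spine plot of the cross-tabulation (a quick sketch):
```
plot(x = pets$pet, y = pets$country)
```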
The function `hist()` creates a quick histogram so you can see the distribution of your data. You can adjust how many columns are plotted with the argument `breaks`.
```
hist(pets$score, breaks = 20)
```
Figure 3\.4: hist()
3\.6 GGplots
------------
While the functions above are nice for quick visualisations, it’s hard to make pretty, publication\-ready plots. The package `ggplot2` (loaded with `tidyverse`) is one of the most common packages for creating beautiful visualisations.
`ggplot2` creates plots using a “grammar of graphics” where you add [geoms](https://psyteachr.github.io/glossary/g#geom "The geometric style in which data are displayed, such as boxplot, density, or histogram.") in layers. It can be complex to understand, but it’s very powerful once you have a mental model of how it works.
Let’s start with a totally empty plot layer created by the `ggplot()` function with no arguments.
```
ggplot()
```
Figure 3\.5: A plot base created by ggplot()
The first argument to `ggplot()` is the `data` table you want to plot. Let’s use the `pets` data we loaded above. The second argument is the `mapping` for which columns in your data table correspond to which properties of the plot, such as the `x`\-axis, the `y`\-axis, line `colour` or `linetype`, point `shape`, or object `fill`. These mappings are specified by the `aes()` function. Just adding this to the `ggplot` function creates the labels and ranges for the `x` and `y` axes. They usually have sensible default values, given your data, but we’ll learn how to change them later.
```
mapping <- aes(x = pet,
y = score,
colour = country,
fill = country)
ggplot(data = pets, mapping = mapping)
```
Figure 3\.6: Empty ggplot with x and y labels
People usually omit the argument names and just put the `aes()` function directly as the second argument to `ggplot`. They also usually omit `x` and `y` as argument names to `aes()` (but you have to name the other properties). Next we can add “geoms,” or plot styles. You literally add them with the `+` symbol. You can also add other plot attributes, such as labels, or change the theme and base font size.
```
ggplot(pets, aes(pet, score, colour = country, fill = country)) +
geom_violin(alpha = 0.5) +
labs(x = "Pet type",
y = "Score on an Important Test",
colour = "Country of Origin",
fill = "Country of Origin",
title = "My first plot!") +
theme_bw(base_size = 15)
```
Figure 3\.7: Violin plot with country represented by colour.
3\.7 Common Plot Types
----------------------
There are many geoms, and they can take different arguments to customise their appearance. We’ll learn about some of the most common below.
### 3\.7\.1 Bar plot
Bar plots are good for categorical data where you want to represent the count.
```
ggplot(pets, aes(pet)) +
geom_bar()
```
Figure 3\.8: Bar plot
### 3\.7\.2 Density plot
Density plots are good for one continuous variable, but only if you have a fairly large number of observations.
```
ggplot(pets, aes(score)) +
geom_density()
```
Figure 3\.9: Density plot
You can represent subsets of a variable by assigning the category variable to the argument `group`, `fill`, or `color`.
```
ggplot(pets, aes(score, fill = pet)) +
geom_density(alpha = 0.5)
```
Figure 3\.10: Grouped density plot
Try changing the `alpha` argument to figure out what it does.
### 3\.7\.3 Frequency polygons
If you want the y\-axis to represent count rather than density, try `geom_freqpoly()`.
```
ggplot(pets, aes(score, color = pet)) +
geom_freqpoly(binwidth = 5)
```
Figure 3\.11: Frequency polygon plot
Try changing the `binwidth` argument to 10 and 1\. How do you figure out the right value?
### 3\.7\.4 Histogram
Histograms are also good for one continuous variable, and work well if you don’t have many observations. Set the `binwidth` to control how wide each bar is.
```
ggplot(pets, aes(score)) +
geom_histogram(binwidth = 5, fill = "white", color = "black")
```
Figure 3\.12: Histogram
Histograms in ggplot look pretty bad unless you set the `fill` and `color`.
If you show grouped histograms, you also probably want to change the default `position` argument.
```
ggplot(pets, aes(score, fill=pet)) +
geom_histogram(binwidth = 5, alpha = 0.5,
position = "dodge")
```
Figure 3\.13: Grouped Histogram
Try changing the `position` argument to “identity,” “fill,” “dodge,” or “stack.”
### 3\.7\.5 Column plot
Column plots are the worst way to represent grouped continuous data, but also one of the most common. If your data are already aggregated (e.g., you have rows for each group with columns for the mean and standard error), you can use `geom_bar` or `geom_col` and `geom_errorbar` directly. If not, you can use the function `stat_summary` to calculate the mean and standard error and send those numbers to the appropriate geom for plotting.
```
ggplot(pets, aes(pet, score, fill=pet)) +
stat_summary(fun = mean, geom = "col", alpha = 0.5) +
stat_summary(fun.data = mean_se, geom = "errorbar",
width = 0.25) +
coord_cartesian(ylim = c(80, 120))
```
Figure 3\.14: Column plot
Try changing the values for `coord_cartesian`. What does this do?
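If your data are already aggregated, you can skip `stat_summary()` and plot the summary columns directly. Here is a minimal sketch with a made-up summary table (the means and standard errors are invented for illustration):
```
pet_summary <- tibble(
  pet = c("cat", "dog", "ferret"),
  mean_score = c(100, 105, 110),
  se = c(1.5, 1.2, 2.0)
)
ggplot(pet_summary, aes(pet, mean_score, fill = pet)) +
  geom_col(alpha = 0.5) +
  geom_errorbar(aes(ymin = mean_score - se, ymax = mean_score + se),
                width = 0.25)
```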
### 3\.7\.6 Boxplot
Boxplots are great for representing the distribution of grouped continuous variables. They fix most of the problems with using bar/column plots for continuous data.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
```
Figure 3\.15: Box plot
### 3\.7\.7 Violin plot
Violin plots are like sideways, mirrored density plots. They give even more information than a boxplot about distribution and are especially useful when you have non\-normal distributions.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(draw_quantiles = .5,
              trim = FALSE, alpha = 0.5)
```
Figure 3\.16: Violin plot
Try changing the `draw_quantiles` argument. Set it to a vector of the numbers 0\.1 to 0\.9 in steps of 0\.1 (e.g., `seq(0.1, 0.9, by = 0.1)`).
### 3\.7\.8 Vertical intervals
Boxplots and violin plots don’t always map well onto inferential stats that use the mean. You can represent the mean and standard error or any other value you can calculate.
Here, we will create a table with the means and standard errors for two groups. We’ll learn how to calculate this from raw data in the chapter on [data wrangling](dplyr.html#dplyr). We also create a new object called `gg` that sets up the base of the plot.
```
dat <- tibble(
group = c("A", "B"),
mean = c(10, 20),
se = c(2, 3)
)
gg <- ggplot(dat, aes(group, mean,
ymin = mean-se,
ymax = mean+se))
```
The trick above can be useful if you want to represent the same data in different ways. You can add different geoms to the base plot without having to re\-type the base plot code.
```
gg + geom_crossbar()
```
Figure 3\.17: geom\_crossbar()
```
gg + geom_errorbar()
```
Figure 3\.18: geom\_errorbar()
```
gg + geom_linerange()
```
Figure 3\.19: geom\_linerange()
```
gg + geom_pointrange()
```
Figure 3\.20: geom\_pointrange()
You can also use the function `stat_summary()` to calculate mean, standard error, or any other value for your data and display it using any geom.
```
ggplot(pets, aes(pet, score, color=pet)) +
stat_summary(fun.data = mean_se, geom = "crossbar") +
stat_summary(fun.min = function(x) mean(x) - sd(x),
fun.max = function(x) mean(x) + sd(x),
geom = "errorbar", width = 0) +
theme(legend.position = "none") # gets rid of the legend
```
Figure 3\.21: Vertical intervals with stat\_summary()
### 3\.7\.9 Scatter plot
Scatter plots are a good way to represent the relationship between two continuous variables.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_point()
```
Figure 3\.22: Scatter plot using geom\_point()
### 3\.7\.10 Line graph
You often want to represent the relationship as a single line.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.23: Line plot using geom\_smooth()
What are some other options for the `method` argument to `geom_smooth`? When might you want to use them?
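For example, `method = "loess"` fits a locally weighted curve instead of a straight line, which can help you spot non-linear trends (a quick sketch):
```
ggplot(pets, aes(age, score, color = pet)) +
  geom_smooth(formula = y ~ x, method = "loess")
```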
You can plot functions other than the linear `y ~ x`. The code below creates a data table where `x` is 101 values between \-10 and 10, and `y` is `x` squared plus `3*x` plus `1`. You'll probably recognise this from algebra as a quadratic function. You can set the `formula` argument in `geom_smooth` to a quadratic formula (`y ~ x + I(x^2)`) to fit a quadratic function to the data.
```
quad <- tibble(
x = seq(-10, 10, length.out = 101),
y = x^2 + 3*x + 1
)
ggplot(quad, aes(x, y)) +
geom_point() +
geom_smooth(formula = y ~ x + I(x^2),
method="lm")
```
Figure 3\.24: Fitting quadratic functions
3\.8 Customisation
------------------
### 3\.8\.1 Labels
You can set custom titles and axis labels in a few different ways.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
labs(title = "Pet score with Age",
x = "Age (in Years)",
y = "score Score",
color = "Pet Type")
```
Figure 3\.25: Set custom labels with labs()
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
ggtitle("Pet score with Age") +
xlab("Age (in Years)") +
ylab("score Score") +
scale_color_discrete(name = "Pet Type")
```
Figure 3\.26: Set custom labels with individual functions
### 3\.8\.2 Colours
You can set custom values for colour and fill using functions like `scale_colour_manual()` and `scale_fill_manual()`. The [Colours chapter in Cookbook for R](http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/) has many more ways to customise colour.
```
ggplot(pets, aes(pet, score, colour = pet, fill = pet)) +
geom_violin() +
scale_color_manual(values = c("darkgreen", "dodgerblue", "orange")) +
scale_fill_manual(values = c("#CCFFCC", "#BBDDFF", "#FFCC66"))
```
Figure 3\.27: Set custom colour
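If you want a specific colour to always go with a specific category, you can name the values. This is a sketch that assumes the levels of `pet` are cat, dog and ferret:
```
ggplot(pets, aes(pet, score, colour = pet, fill = pet)) +
  geom_violin() +
  scale_colour_manual(values = c(cat = "darkgreen", dog = "dodgerblue", ferret = "orange")) +
  scale_fill_manual(values = c(cat = "#CCFFCC", dog = "#BBDDFF", ferret = "#FFCC66"))
```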
### 3\.8\.3 Themes
GGplot comes with several additional themes and the ability to fully customise your theme. Type `?theme` into the console to see the full list. Other packages such as `cowplot` also have custom themes. You can add a custom theme to the end of your ggplot object and specify a new `base_size` to make the default fonts and lines larger or smaller.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
theme_minimal(base_size = 18)
```
Figure 3\.28: Minimal theme with 18\-point base font size
It’s more complicated, but you can fully customise your theme with `theme()`. You can save this to an object and add it to the end of all of your plots to make the style consistent. Alternatively, you can set the theme at the top of a script with `theme_set()` and this will apply to all subsequent ggplot plots.
```
vampire_theme <- theme(
rect = element_rect(fill = "black"),
panel.background = element_rect(fill = "black"),
text = element_text(size = 20, colour = "white"),
axis.text = element_text(size = 16, colour = "grey70"),
line = element_line(colour = "white", size = 2),
panel.grid = element_blank(),
axis.line = element_line(colour = "white"),
axis.ticks = element_blank(),
legend.position = "top"
)
theme_set(vampire_theme)
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.29: Custom theme
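If you want to undo this later in the same session, set the theme back to the ggplot2 default:
```
theme_set(theme_grey()) # theme_grey() is the ggplot2 default theme
```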
### 3\.8\.4 Save as file
You can save a ggplot using `ggsave()`. It saves the last ggplot you made, by default, but you can specify which plot you want to save if you assigned that plot to a variable.
You can set the `width` and `height` of your plot. The default units are inches, but you can change the `units` argument to “in,” “cm,” or “mm.”
```
box <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
violin <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(alpha = 0.5)
ggsave("demog_violin_plot.png", width = 5, height = 7)
ggsave("demog_box_plot.jpg", plot = box, width = 5, height = 7)
```
The file type is set from the filename suffix, or by specifying the argument `device`, which can take the following values: “eps,” “ps,” “tex,” “pdf,” “jpeg,” “tiff,” “png,” “bmp,” “svg” or “wmf.”
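For print-quality figures you can also set the resolution and use metric units (a sketch; the filename is just an example):
```
ggsave("demog_violin_plot_print.png", plot = violin,
       width = 12, height = 18, units = "cm", dpi = 300)
```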
3\.9 Combination Plots
----------------------
### 3\.9\.1 Violinbox plot
A combination of a violin plot to show the shape of the distribution and a boxplot to show the median and interquartile ranges can be a very useful visualisation.
```
ggplot(pets, aes(pet, score, fill = pet)) +
geom_violin(show.legend = FALSE) +
geom_boxplot(width = 0.2, fill = "white",
show.legend = FALSE)
```
Figure 3\.30: Violin\-box plot
Set the `show.legend` argument to `FALSE` to hide the legend. We do this here because the x\-axis already labels the pet types.
### 3\.9\.2 Violin\-point\-range plot
You can use `stat_summary()` to superimpose a point\-range plot showing the mean ± 1 SD. You'll learn how to write your own functions in the lesson on [Iteration and Functions](func.html#func).
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(trim = FALSE, alpha = 0.5) +
stat_summary(
fun = mean,
fun.max = function(x) {mean(x) + sd(x)},
fun.min = function(x) {mean(x) - sd(x)},
geom="pointrange"
)
```
Figure 3\.31: Point\-range plot using stat\_summary()
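If you would rather show the mean ± 1 SE, the built-in `mean_se()` helper can replace the anonymous functions (a minimal sketch):
```
ggplot(pets, aes(pet, score, fill = pet)) +
  geom_violin(trim = FALSE, alpha = 0.5) +
  stat_summary(fun.data = mean_se, geom = "pointrange")
```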
### 3\.9\.3 Violin\-jitter plot
If you don’t have a lot of data points, it’s good to represent them individually. You can use `geom_jitter` to do this.
```
# sample_n chooses 50 random observations from the dataset
ggplot(sample_n(pets, 50), aes(pet, score, fill=pet)) +
geom_violin(
trim = FALSE,
draw_quantiles = c(0.25, 0.5, 0.75),
alpha = 0.5
) +
geom_jitter(
width = 0.15, # points spread out over 15% of available width
height = 0, # do not move position on the y-axis
alpha = 0.5,
size = 3
)
```
Figure 3\.32: Violin\-jitter plot
### 3\.9\.4 Scatter\-line graph
If your graph isn’t too complicated, it’s good to also show the individual data points behind the line.
```
ggplot(sample_n(pets, 50), aes(age, weight, colour = pet)) +
geom_point() +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.33: Scatter\-line plot
### 3\.9\.5 Grid of plots
You can use the [`cowplot`](https://cran.r-project.org/web/packages/cowplot/vignettes/introduction.html) package to easily make grids of different graphs. First, you have to assign each plot a name. Then you list all the plots as the first arguments of `plot_grid()` and provide a vector of labels.
```
gg <- ggplot(pets, aes(pet, score, colour = pet))
nolegend <- theme(legend.position = "none")
vp <- gg + geom_violin(alpha = 0.5) + nolegend +
ggtitle("Violin Plot")
bp <- gg + geom_boxplot(alpha = 0.5) + nolegend +
ggtitle("Box Plot")
cp <- gg + stat_summary(fun = mean, geom = "col", fill = "white") + nolegend +
ggtitle("Column Plot")
dp <- ggplot(pets, aes(score, colour = pet)) +
geom_density() + nolegend +
ggtitle("Density Plot")
plot_grid(vp, bp, cp, dp, labels = LETTERS[1:4])
```
Figure 3\.34: Grid of plots
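By default `plot_grid()` chooses the layout for you, but you can control it with the `nrow` or `ncol` arguments, for example:
```
plot_grid(vp, bp, cp, dp, labels = LETTERS[1:4], nrow = 1)
```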
3\.10 Overlapping Discrete Data
-------------------------------
### 3\.10\.1 Reducing Opacity
You can deal with overlapping data points (very common if you’re using Likert scales) by reducing the opacity of the points. You need to use trial and error to adjust these so they look right.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_point(alpha = 0.25) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.35: Deal with overlapping data using transparency
### 3\.10\.2 Proportional Dot Plots
Or you can set the size of the dot proportional to the number of overlapping observations using `geom_count()`.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_count()
```
Figure 3\.36: Deal with overlapping data using geom\_count()
Alternatively, you can transform your data (we will learn to do this in the [data wrangling](dplyr.html#dplyr) chapter) to create a count column and use the count to set the dot colour.
```
pets %>%
group_by(age, score) %>%
summarise(count = n(), .groups = "drop") %>%
ggplot(aes(age, score, color=count)) +
geom_point(size = 2) +
scale_color_viridis_c()
```
Figure 3\.37: Deal with overlapping data using dot colour
The [viridis package](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html) changes the colour themes to be easier to read by people with colourblindness and to print better in greyscale. Viridis is built into `ggplot2` since v3\.0\.0\. It uses `scale_colour_viridis_c()` and `scale_fill_viridis_c()` for continuous variables and `scale_colour_viridis_d()` and `scale_fill_viridis_d()` for discrete variables.
3\.11 Overlapping Continuous Data
---------------------------------
Even if the variables are continuous, overplotting might obscure any relationships if you have lots of data.
```
ggplot(pets, aes(age, score)) +
geom_point()
```
Figure 3\.38: Overplotted data
### 3\.11\.1 2D Density Plot
Use `geom_density2d()` to create a contour map.
```
ggplot(pets, aes(age, score)) +
geom_density2d()
```
Figure 3\.39: Contour map with geom\_density2d()
You can use `stat_density_2d(aes(fill = ..level..), geom = "polygon")` to create a heatmap\-style density plot.
```
ggplot(pets, aes(age, score)) +
stat_density_2d(aes(fill = ..level..), geom = "polygon") +
scale_fill_viridis_c()
```
Figure 3\.40: Heatmap\-density plot
### 3\.11\.2 2D Histogram
Use `geom_bin2d()` to create a rectangular heatmap of bin counts. Set `binwidth` to a vector giving the x and y dimensions of each box.
```
ggplot(pets, aes(age, score)) +
geom_bin2d(binwidth = c(1, 5))
```
Figure 3\.41: Heatmap of bin counts
### 3\.11\.3 Hexagonal Heatmap
Use `geom_hex()` to create a hexagonal heatmap of bin counts. Adjust the `binwidth`, `xlim()`, `ylim()` and/or the figure dimensions to make the hexagons more or less stretched.
```
ggplot(pets, aes(age, score)) +
geom_hex(binwidth = c(1, 5))
```
Figure 3\.42: Hexagonal heatmap of bin counts
### 3\.11\.4 Correlation Heatmap
I’ve included the code for creating a correlation matrix from a table of variables, but you don’t need to understand how this is done yet. We’ll cover `mutate()` and `gather()` functions in the [dplyr](dplyr.html#dplyr) and [tidyr](tidyr.html#tidyr) lessons.
```
heatmap <- pets %>%
select_if(is.numeric) %>% # get just the numeric columns
cor() %>% # create the correlation matrix
as_tibble(rownames = "V1") %>% # make it a tibble
gather("V2", "r", 2:ncol(.)) # wide to long (V2)
```
Once you have a correlation matrix in the correct (long) format, it’s easy to make a heatmap using `geom_tile()`.
```
ggplot(heatmap, aes(V1, V2, fill=r)) +
geom_tile() +
scale_fill_viridis_c()
```
Figure 3\.43: Heatmap using geom\_tile()
3\.12 Interactive Plots
-----------------------
You can use the `plotly` package to make interactive graphs. Just assign your ggplot to a variable and use the function `ggplotly()`.
```
demog_plot <- ggplot(pets, aes(age, score, fill=pet)) +
geom_point() +
geom_smooth(formula = y~x, method = lm)
ggplotly(demog_plot)
```
Figure 3\.44: Interactive graph using plotly
Hover over the data points above and click on the legend items.
3\.13 Glossary
--------------
| term | definition |
| --- | --- |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [geom](https://psyteachr.github.io/glossary/g#geom) | The geometric style in which data are displayed, such as boxplot, density, or histogram. |
| [likert](https://psyteachr.github.io/glossary/l#likert) | A rating scale with a small number of discrete points in order |
| [nominal](https://psyteachr.github.io/glossary/n#nominal) | Categorical variables that don’t have an inherent order, such as types of animal. |
| [ordinal](https://psyteachr.github.io/glossary/o#ordinal) | Discrete variables that have an inherent order, such as number of legs |
3\.14 Exercises
---------------
Download the [exercises](exercises/03_ggplot_exercise.Rmd). Check the [plots](exercises/03_ggplot_answers.html) to see what your plots should look like (this page doesn't contain the answer code). Look at the [answers](exercises/03_ggplot_answers.Rmd) only after you've attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(3)
# run this to access the answers
dataskills::exercise(3, answers = TRUE)
```
3\.1 Learning Objectives
------------------------
### 3\.1\.1 Basic
1. Understand what types of graphs are best for [different types of data](ggplot.html#vartypes) [(video)](https://youtu.be/tOFQFPRgZ3M)
* 1 discrete
* 1 continuous
* 2 discrete
* 2 continuous
* 1 discrete, 1 continuous
* 3 continuous
2. Create common types of graphs with ggplot2 [(video)](https://youtu.be/kKlQupjD__g)
* [`geom_bar()`](ggplot.html#geom_bar)
* [`geom_density()`](ggplot.html#geom_density)
* [`geom_freqpoly()`](ggplot.html#geom_freqpoly)
* [`geom_histogram()`](ggplot.html#geom_histogram)
* [`geom_col()`](ggplot.html#geom_col)
* [`geom_boxplot()`](ggplot.html#geom_boxplot)
* [`geom_violin()`](ggplot.html#geom_violin)
* [Vertical Intervals](ggplot.html#vertical_intervals)
+ `geom_crossbar()`
+ `geom_errorbar()`
+ `geom_linerange()`
+ `geom_pointrange()`
* [`geom_point()`](ggplot.html#geom_point)
* [`geom_smooth()`](ggplot.html#geom_smooth)
3. Set custom [labels](ggplot.html#custom-labels), [colours](ggplot.html#custom-colours), and [themes](ggplot.html#themes) [(video)](https://youtu.be/6pHuCbOh86s)
4. [Combine plots](combo_plots) on the same plot, as facets, or as a grid using cowplot [(video)](https://youtu.be/AnqlfuU-VZk)
5. [Save plots](ggplot.html#ggsave) as an image file [(video)](https://youtu.be/f1Y53mjEli0)
### 3\.1\.2 Intermediate
6. Add lines to graphs
7. Deal with [overlapping data](ggplot.html#overlap)
8. Create less common types of graphs
* [`geom_tile()`](ggplot.html#geom_tile)
* [`geom_density2d()`](ggplot.html#geom_density2d)
* [`geom_bin2d()`](ggplot.html#geom_bin2d)
* [`geom_hex()`](ggplot.html#geom_hex)
* [`geom_count()`](ggplot.html#geom_count)
9. Adjust axes (e.g., flip coordinates, set axis limits)
10. Create interactive graphs with [`plotly`](ggplot.html#plotly)
### 3\.1\.1 Basic
1. Understand what types of graphs are best for [different types of data](ggplot.html#vartypes) [(video)](https://youtu.be/tOFQFPRgZ3M)
* 1 discrete
* 1 continuous
* 2 discrete
* 2 continuous
* 1 discrete, 1 continuous
* 3 continuous
2. Create common types of graphs with ggplot2 [(video)](https://youtu.be/kKlQupjD__g)
* [`geom_bar()`](ggplot.html#geom_bar)
* [`geom_density()`](ggplot.html#geom_density)
* [`geom_freqpoly()`](ggplot.html#geom_freqpoly)
* [`geom_histogram()`](ggplot.html#geom_histogram)
* [`geom_col()`](ggplot.html#geom_col)
* [`geom_boxplot()`](ggplot.html#geom_boxplot)
* [`geom_violin()`](ggplot.html#geom_violin)
* [Vertical Intervals](ggplot.html#vertical_intervals)
+ `geom_crossbar()`
+ `geom_errorbar()`
+ `geom_linerange()`
+ `geom_pointrange()`
* [`geom_point()`](ggplot.html#geom_point)
* [`geom_smooth()`](ggplot.html#geom_smooth)
3. Set custom [labels](ggplot.html#custom-labels), [colours](ggplot.html#custom-colours), and [themes](ggplot.html#themes) [(video)](https://youtu.be/6pHuCbOh86s)
4. [Combine plots](combo_plots) on the same plot, as facets, or as a grid using cowplot [(video)](https://youtu.be/AnqlfuU-VZk)
5. [Save plots](ggplot.html#ggsave) as an image file [(video)](https://youtu.be/f1Y53mjEli0)
### 3\.1\.2 Intermediate
6. Add lines to graphs
7. Deal with [overlapping data](ggplot.html#overlap)
8. Create less common types of graphs
* [`geom_tile()`](ggplot.html#geom_tile)
* [`geom_density2d()`](ggplot.html#geom_density2d)
* [`geom_bin2d()`](ggplot.html#geom_bin2d)
* [`geom_hex()`](ggplot.html#geom_hex)
* [`geom_count()`](ggplot.html#geom_count)
9. Adjust axes (e.g., flip coordinates, set axis limits)
10. Create interactive graphs with [`plotly`](ggplot.html#plotly)
3\.2 Resources
--------------
* [Chapter 3: Data Visualisation](http://r4ds.had.co.nz/data-visualisation.html) of *R for Data Science*
* [ggplot2 cheat sheet](https://github.com/rstudio/cheatsheets/raw/master/data-visualization-2.1.pdf)
* [Chapter 28: Graphics for communication](http://r4ds.had.co.nz/graphics-for-communication.html) of *R for Data Science*
* [Look at Data](http://socviz.co/look-at-data.html) from [Data Vizualization for Social Science](http://socviz.co/)
* [Hack Your Data Beautiful](https://psyteachr.github.io/hack-your-data/) workshop by University of Glasgow postgraduate students
* [Graphs](http://www.cookbook-r.com/Graphs) in *Cookbook for R*
* [ggplot2 documentation](https://ggplot2.tidyverse.org/reference/)
* [The R Graph Gallery](http://www.r-graph-gallery.com/) (this is really useful)
* [Top 50 ggplot2 Visualizations](http://r-statistics.co/Top50-Ggplot2-Visualizations-MasterList-R-Code.html)
* [R Graphics Cookbook](http://www.cookbook-r.com/Graphs/) by Winston Chang
* [ggplot extensions](https://www.ggplot2-exts.org/)
* [plotly](https://plot.ly/ggplot2/) for creating interactive graphs
3\.3 Setup
----------
```
# libraries needed for these graphs
library(tidyverse)
library(dataskills)
library(plotly)
library(cowplot)
set.seed(30250) # makes sure random numbers are reproducible
```
3\.4 Common Variable Combinations
---------------------------------
[Continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") variables are properties you can measure, like height. [Discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") variables are things you can count, like the number of pets you have. Categorical variables can be [nominal](https://psyteachr.github.io/glossary/n#nominal "Categorical variables that don’t have an inherent order, such as types of animal."), where the categories don’t really have an order, like cats, dogs and ferrets (even though ferrets are obviously best). They can also be [ordinal](https://psyteachr.github.io/glossary/o#ordinal "Discrete variables that have an inherent order, such as number of legs"), where there is a clear order, but the distance between the categories isn’t something you could exactly equate, like points on a [Likert](https://psyteachr.github.io/glossary/l#likert "A rating scale with a small number of discrete points in order") rating scale.
Different types of visualisations are good for different types of variables.
Load the `pets` dataset from the `dataskills` package and explore it with `glimpse(pets)` or `View(pets)`. This is a simulated dataset with one random factor (`id`), two categorical factors (`pet`, `country`) and three continuous variables (`score`, `age`, `weight`).
```
data("pets")
# if you don't have the dataskills package, use:
# pets <- read_csv("https://psyteachr.github.io/msc-data-skills/data/pets.csv", col_types = "cffiid")
glimpse(pets)
```
```
## Rows: 800
## Columns: 6
## $ id <chr> "S001", "S002", "S003", "S004", "S005", "S006", "S007", "S008"…
## $ pet <fct> dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, dog, do…
## $ country <fct> UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK, UK…
## $ score <int> 90, 107, 94, 120, 111, 110, 100, 107, 106, 109, 85, 110, 102, …
## $ age <int> 6, 8, 2, 10, 4, 8, 9, 8, 6, 11, 5, 9, 1, 10, 7, 8, 1, 8, 5, 13…
## $ weight <dbl> 19.78932, 20.01422, 19.14863, 19.56953, 21.39259, 21.31880, 19…
```
Before you read ahead, come up with an example of each type of variable combination and sketch the types of graphs that would best display these data.
* 1 categorical
* 1 continuous
* 2 categorical
* 2 continuous
* 1 categorical, 1 continuous
* 3 continuous
3\.5 Basic Plots
----------------
R has some basic plotting functions, but they’re difficult to use and aesthetically not very nice. They can be useful to have a quick look at data while you’re working on a script, though. The function `plot()` usually defaults to a sensible type of plot, depending on whether the arguments `x` and `y` are categorical, continuous, or missing.
```
plot(x = pets$pet)
```
Figure 3\.1: plot() with categorical x
```
plot(x = pets$pet, y = pets$score)
```
Figure 3\.2: plot() with categorical x and continuous y
```
plot(x = pets$age, y = pets$weight)
```
Figure 3\.3: plot() with continuous x and y
The function `hist()` creates a quick histogram so you can see the distribution of your data. You can adjust how many columns are plotted with the argument `breaks`.
```
hist(pets$score, breaks = 20)
```
Figure 3\.4: hist()
3\.6 GGplots
------------
While the functions above are nice for quick visualisations, it’s hard to make pretty, publication\-ready plots. The package `ggplot2` (loaded with `tidyverse`) is one of the most common packages for creating beautiful visualisations.
`ggplot2` creates plots using a “grammar of graphics” where you add [geoms](https://psyteachr.github.io/glossary/g#geom "The geometric style in which data are displayed, such as boxplot, density, or histogram.") in layers. It can be complex to understand, but it’s very powerful once you have a mental model of how it works.
Let’s start with a totally empty plot layer created by the `ggplot()` function with no arguments.
```
ggplot()
```
Figure 3\.5: A plot base created by ggplot()
The first argument to `ggplot()` is the `data` table you want to plot. Let’s use the `pets` data we loaded above. The second argument is the `mapping` for which columns in your data table correspond to which properties of the plot, such as the `x`\-axis, the `y`\-axis, line `colour` or `linetype`, point `shape`, or object `fill`. These mappings are specified by the `aes()` function. Just adding this to the `ggplot` function creates the labels and ranges for the `x` and `y` axes. They usually have sensible default values, given your data, but we’ll learn how to change them later.
```
mapping <- aes(x = pet,
y = score,
colour = country,
fill = country)
ggplot(data = pets, mapping = mapping)
```
Figure 3\.6: Empty ggplot with x and y labels
People usually omit the argument names and just put the `aes()` function directly as the second argument to `ggplot`. They also usually omit `x` and `y` as argument names to `aes()` (but you have to name the other properties). Next we can add “geoms,” or plot styles. You literally add them with the `+` symbol. You can also add other plot attributes, such as labels, or change the theme and base font size.
```
ggplot(pets, aes(pet, score, colour = country, fill = country)) +
geom_violin(alpha = 0.5) +
labs(x = "Pet type",
y = "Score on an Important Test",
colour = "Country of Origin",
fill = "Country of Origin",
title = "My first plot!") +
theme_bw(base_size = 15)
```
Figure 3\.7: Violin plot with country represented by colour.
3\.7 Common Plot Types
----------------------
There are many geoms, and they can take different arguments to customise their appearance. We’ll learn about some of the most common below.
### 3\.7\.1 Bar plot
Bar plots are good for categorical data where you want to represent the count.
```
ggplot(pets, aes(pet)) +
geom_bar()
```
Figure 3\.8: Bar plot
### 3\.7\.2 Density plot
Density plots are good for one continuous variable, but only if you have a fairly large number of observations.
```
ggplot(pets, aes(score)) +
geom_density()
```
Figure 3\.9: Density plot
You can represent subsets of a variable by assigning the category variable to the argument `group`, `fill`, or `color`.
```
ggplot(pets, aes(score, fill = pet)) +
geom_density(alpha = 0.5)
```
Figure 3\.10: Grouped density plot
Try changing the `alpha` argument to figure out what it does.
### 3\.7\.3 Frequency polygons
If you want the y\-axis to represent count rather than density, try `geom_freqpoly()`.
```
ggplot(pets, aes(score, color = pet)) +
geom_freqpoly(binwidth = 5)
```
Figure 3\.11: Frequency ploygon plot
Try changing the `binwidth` argument to 10 and 1\. How do you figure out the right value?
### 3\.7\.4 Histogram
Histograms are also good for one continuous variable, and work well if you don’t have many observations. Set the `binwidth` to control how wide each bar is.
```
ggplot(pets, aes(score)) +
geom_histogram(binwidth = 5, fill = "white", color = "black")
```
Figure 3\.12: Histogram
Histograms in ggplot look pretty bad unless you set the `fill` and `color`.
If you show grouped histograms, you also probably want to change the default `position` argument.
```
ggplot(pets, aes(score, fill=pet)) +
geom_histogram(binwidth = 5, alpha = 0.5,
position = "dodge")
```
Figure 3\.13: Grouped Histogram
Try changing the `position` argument to “identity,” “fill,” “dodge,” or “stack.”
### 3\.7\.5 Column plot
Column plots are the worst way to represent grouped continuous data, but also one of the most common. If your data are already aggregated (e.g., you have rows for each group with columns for the mean and standard error), you can use `geom_bar` or `geom_col` and `geom_errorbar` directly. If not, you can use the function `stat_summary` to calculate the mean and standard error and send those numbers to the appropriate geom for plotting.
```
ggplot(pets, aes(pet, score, fill=pet)) +
stat_summary(fun = mean, geom = "col", alpha = 0.5) +
stat_summary(fun.data = mean_se, geom = "errorbar",
width = 0.25) +
coord_cartesian(ylim = c(80, 120))
```
Figure 3\.14: Column plot
Try changing the values for `coord_cartesian`. What does this do?
### 3\.7\.6 Boxplot
Boxplots are great for representing the distribution of grouped continuous variables. They fix most of the problems with using bar/column plots for continuous data.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
```
Figure 3\.15: Box plot
### 3\.7\.7 Violin plot
Violin pots are like sideways, mirrored density plots. They give even more information than a boxplot about distribution and are especially useful when you have non\-normal distributions.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(draw_quantiles = .5,
trim = FALSE, alpha = 0.5,)
```
Figure 3\.16: Violin plot
Try changing the `quantile` argument. Set it to a vector of the numbers 0\.1 to 0\.9 in steps of 0\.1\.
### 3\.7\.8 Vertical intervals
Boxplots and violin plots don’t always map well onto inferential stats that use the mean. You can represent the mean and standard error or any other value you can calculate.
Here, we will create a table with the means and standard errors for two groups. We’ll learn how to calculate this from raw data in the chapter on [data wrangling](dplyr.html#dplyr). We also create a new object called `gg` that sets up the base of the plot.
```
dat <- tibble(
group = c("A", "B"),
mean = c(10, 20),
se = c(2, 3)
)
gg <- ggplot(dat, aes(group, mean,
ymin = mean-se,
ymax = mean+se))
```
The trick above can be useful if you want to represent the same data in different ways. You can add different geoms to the base plot without having to re\-type the base plot code.
```
gg + geom_crossbar()
```
Figure 3\.17: geom\_crossbar()
```
gg + geom_errorbar()
```
Figure 3\.18: geom\_errorbar()
```
gg + geom_linerange()
```
Figure 3\.19: geom\_linerange()
```
gg + geom_pointrange()
```
Figure 3\.20: geom\_pointrange()
You can also use the function `stats_summary` to calculate mean, standard error, or any other value for your data and display it using any geom.
```
ggplot(pets, aes(pet, score, color=pet)) +
stat_summary(fun.data = mean_se, geom = "crossbar") +
stat_summary(fun.min = function(x) mean(x) - sd(x),
fun.max = function(x) mean(x) + sd(x),
geom = "errorbar", width = 0) +
theme(legend.position = "none") # gets rid of the legend
```
Figure 3\.21: Vertical intervals with stats\_summary()
### 3\.7\.9 Scatter plot
Scatter plots are a good way to represent the relationship between two continuous variables.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_point()
```
Figure 3\.22: Scatter plot using geom\_point()
### 3\.7\.10 Line graph
You often want to represent the relationship as a single line.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.23: Line plot using geom\_smooth()
What are some other options for the `method` argument to `geom_smooth`? When might you want to use them?
You can plot functions other than the linear `y ~ x`. The code below creates a data table where `x` is 101 values between \-10 and 10\. and `y` is `x` squared plus `3*x` plus `1`. You’ll probably recognise this from algebra as the quadratic equation. You can set the `formula` argument in `geom_smooth` to a quadratic formula (`y ~ x + I(x^2)`) to fit a quadratic function to the data.
```
quad <- tibble(
x = seq(-10, 10, length.out = 101),
y = x^2 + 3*x + 1
)
ggplot(quad, aes(x, y)) +
geom_point() +
geom_smooth(formula = y ~ x + I(x^2),
method="lm")
```
Figure 3\.24: Fitting quadratic functions
### 3\.7\.1 Bar plot
Bar plots are good for categorical data where you want to represent the count.
```
ggplot(pets, aes(pet)) +
geom_bar()
```
Figure 3\.8: Bar plot
### 3\.7\.2 Density plot
Density plots are good for one continuous variable, but only if you have a fairly large number of observations.
```
ggplot(pets, aes(score)) +
geom_density()
```
Figure 3\.9: Density plot
You can represent subsets of a variable by assigning the category variable to the argument `group`, `fill`, or `color`.
```
ggplot(pets, aes(score, fill = pet)) +
geom_density(alpha = 0.5)
```
Figure 3\.10: Grouped density plot
Try changing the `alpha` argument to figure out what it does.
### 3\.7\.3 Frequency polygons
If you want the y\-axis to represent count rather than density, try `geom_freqpoly()`.
```
ggplot(pets, aes(score, color = pet)) +
geom_freqpoly(binwidth = 5)
```
Figure 3\.11: Frequency ploygon plot
Try changing the `binwidth` argument to 10 and 1\. How do you figure out the right value?
### 3\.7\.4 Histogram
Histograms are also good for one continuous variable, and work well if you don’t have many observations. Set the `binwidth` to control how wide each bar is.
```
ggplot(pets, aes(score)) +
geom_histogram(binwidth = 5, fill = "white", color = "black")
```
Figure 3\.12: Histogram
Histograms in ggplot look pretty bad unless you set the `fill` and `color`.
If you show grouped histograms, you also probably want to change the default `position` argument.
```
ggplot(pets, aes(score, fill=pet)) +
geom_histogram(binwidth = 5, alpha = 0.5,
position = "dodge")
```
Figure 3\.13: Grouped Histogram
Try changing the `position` argument to “identity,” “fill,” “dodge,” or “stack.”
### 3\.7\.5 Column plot
Column plots are the worst way to represent grouped continuous data, but also one of the most common. If your data are already aggregated (e.g., you have rows for each group with columns for the mean and standard error), you can use `geom_bar` or `geom_col` and `geom_errorbar` directly. If not, you can use the function `stat_summary` to calculate the mean and standard error and send those numbers to the appropriate geom for plotting.
```
ggplot(pets, aes(pet, score, fill=pet)) +
stat_summary(fun = mean, geom = "col", alpha = 0.5) +
stat_summary(fun.data = mean_se, geom = "errorbar",
width = 0.25) +
coord_cartesian(ylim = c(80, 120))
```
Figure 3\.14: Column plot
Try changing the values for `coord_cartesian`. What does this do?
### 3\.7\.6 Boxplot
Boxplots are great for representing the distribution of grouped continuous variables. They fix most of the problems with using bar/column plots for continuous data.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
```
Figure 3\.15: Box plot
### 3\.7\.7 Violin plot
Violin pots are like sideways, mirrored density plots. They give even more information than a boxplot about distribution and are especially useful when you have non\-normal distributions.
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(draw_quantiles = .5,
trim = FALSE, alpha = 0.5,)
```
Figure 3\.16: Violin plot
Try changing the `quantile` argument. Set it to a vector of the numbers 0\.1 to 0\.9 in steps of 0\.1\.
### 3\.7\.8 Vertical intervals
Boxplots and violin plots don’t always map well onto inferential stats that use the mean. You can represent the mean and standard error or any other value you can calculate.
Here, we will create a table with the means and standard errors for two groups. We’ll learn how to calculate this from raw data in the chapter on [data wrangling](dplyr.html#dplyr). We also create a new object called `gg` that sets up the base of the plot.
```
dat <- tibble(
group = c("A", "B"),
mean = c(10, 20),
se = c(2, 3)
)
gg <- ggplot(dat, aes(group, mean,
ymin = mean-se,
ymax = mean+se))
```
The trick above can be useful if you want to represent the same data in different ways. You can add different geoms to the base plot without having to re\-type the base plot code.
```
gg + geom_crossbar()
```
Figure 3\.17: geom\_crossbar()
```
gg + geom_errorbar()
```
Figure 3\.18: geom\_errorbar()
```
gg + geom_linerange()
```
Figure 3\.19: geom\_linerange()
```
gg + geom_pointrange()
```
Figure 3\.20: geom\_pointrange()
You can also use the function `stats_summary` to calculate mean, standard error, or any other value for your data and display it using any geom.
```
ggplot(pets, aes(pet, score, color=pet)) +
stat_summary(fun.data = mean_se, geom = "crossbar") +
stat_summary(fun.min = function(x) mean(x) - sd(x),
fun.max = function(x) mean(x) + sd(x),
geom = "errorbar", width = 0) +
theme(legend.position = "none") # gets rid of the legend
```
Figure 3\.21: Vertical intervals with stats\_summary()
### 3\.7\.9 Scatter plot
Scatter plots are a good way to represent the relationship between two continuous variables.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_point()
```
Figure 3\.22: Scatter plot using geom\_point()
### 3\.7\.10 Line graph
You often want to represent the relationship as a single line.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.23: Line plot using geom\_smooth()
What are some other options for the `method` argument to `geom_smooth`? When might you want to use them?
You can plot functions other than the linear `y ~ x`. The code below creates a data table where `x` is 101 values between \-10 and 10\. and `y` is `x` squared plus `3*x` plus `1`. You’ll probably recognise this from algebra as the quadratic equation. You can set the `formula` argument in `geom_smooth` to a quadratic formula (`y ~ x + I(x^2)`) to fit a quadratic function to the data.
```
quad <- tibble(
x = seq(-10, 10, length.out = 101),
y = x^2 + 3*x + 1
)
ggplot(quad, aes(x, y)) +
geom_point() +
geom_smooth(formula = y ~ x + I(x^2),
method="lm")
```
Figure 3\.24: Fitting quadratic functions
3\.8 Customisation
------------------
### 3\.8\.1 Labels
You can set custom titles and axis labels in a few different ways.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
labs(title = "Pet score with Age",
x = "Age (in Years)",
y = "score Score",
color = "Pet Type")
```
Figure 3\.25: Set custom labels with labs()
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
ggtitle("Pet score with Age") +
xlab("Age (in Years)") +
ylab("score Score") +
scale_color_discrete(name = "Pet Type")
```
Figure 3\.26: Set custom labels with individual functions
### 3\.8\.2 Colours
You can set custom values for colour and fill using functions like `scale_colour_manual()` and `scale_fill_manual()`. The [Colours chapter in Cookbook for R](http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/) has many more ways to customise colour.
```
ggplot(pets, aes(pet, score, colour = pet, fill = pet)) +
geom_violin() +
scale_color_manual(values = c("darkgreen", "dodgerblue", "orange")) +
scale_fill_manual(values = c("#CCFFCC", "#BBDDFF", "#FFCC66"))
```
Figure 3\.27: Set custom colour
### 3\.8\.3 Themes
GGplot comes with several additional themes and the ability to fully customise your theme. Type `?theme` into the console to see the full list. Other packages such as `cowplot` also have custom themes. You can add a custom theme to the end of your ggplot object and specify a new `base_size` to make the default fonts and lines larger or smaller.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
theme_minimal(base_size = 18)
```
Figure 3\.28: Minimal theme with 18\-point base font size
It’s more complicated, but you can fully customise your theme with `theme()`. You can save this to an object and add it to the end of all of your plots to make the style consistent. Alternatively, you can set the theme at the top of a script with `theme_set()` and this will apply to all subsequent ggplot plots.
```
vampire_theme <- theme(
rect = element_rect(fill = "black"),
panel.background = element_rect(fill = "black"),
text = element_text(size = 20, colour = "white"),
axis.text = element_text(size = 16, colour = "grey70"),
line = element_line(colour = "white", size = 2),
panel.grid = element_blank(),
axis.line = element_line(colour = "white"),
axis.ticks = element_blank(),
legend.position = "top"
)
theme_set(vampire_theme)
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.29: Custom theme
### 3\.8\.4 Save as file
You can save a ggplot using `ggsave()`. It saves the last ggplot you made, by default, but you can specify which plot you want to save if you assigned that plot to a variable.
You can set the `width` and `height` of your plot. The default units are inches, but you can change the `units` argument to “in,” “cm,” or “mm.”
```
box <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_boxplot(alpha = 0.5)
violin <- ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(alpha = 0.5)
ggsave("demog_violin_plot.png", width = 5, height = 7)
ggsave("demog_box_plot.jpg", plot = box, width = 5, height = 7)
```
The file type is set from the filename suffix, or by specifying the argument `device`, which can take the following values: “eps,” “ps,” “tex,” “pdf,” “jpeg,” “tiff,” “png,” “bmp,” “svg” or “wmf.”
### 3\.8\.1 Labels
You can set custom titles and axis labels in a few different ways.
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
labs(title = "Pet score with Age",
x = "Age (in Years)",
y = "score Score",
color = "Pet Type")
```
Figure 3\.25: Set custom labels with labs()
```
ggplot(pets, aes(age, score, color = pet)) +
geom_smooth(formula = y ~ x, method="lm") +
ggtitle("Pet score with Age") +
xlab("Age (in Years)") +
ylab("score Score") +
scale_color_discrete(name = "Pet Type")
```
Figure 3\.26: Set custom labels with individual functions
### 3\.8\.2 Colours
You can set custom values for colour and fill using functions like `scale_colour_manual()` and `scale_fill_manual()`. The [Colours chapter in Cookbook for R](http://www.cookbook-r.com/Graphs/Colors_(ggplot2)/) has many more ways to customise colour.
```
ggplot(pets, aes(pet, score, colour = pet, fill = pet)) +
geom_violin() +
scale_color_manual(values = c("darkgreen", "dodgerblue", "orange")) +
scale_fill_manual(values = c("#CCFFCC", "#BBDDFF", "#FFCC66"))
```
Figure 3\.27: Set custom colour
3\.9 Combination Plots
----------------------
### 3\.9\.1 Violinbox plot
A combination of a violin plot to show the shape of the distribution and a boxplot to show the median and interquartile ranges can be a very useful visualisation.
```
ggplot(pets, aes(pet, score, fill = pet)) +
geom_violin(show.legend = FALSE) +
geom_boxplot(width = 0.2, fill = "white",
show.legend = FALSE)
```
Figure 3\.30: Violin\-box plot
Set the `show.legend` argument to `FALSE` to hide the legend. We do this here because the x\-axis already labels the pet types.
### 3\.9\.2 Violin\-point\-range plot
You can use `stat_summary()` to superimpose a point\-range plot showing the mean ± 1 SD. You’ll learn how to write your own functions in the lesson on [Iteration and Functions](func.html#func).
```
ggplot(pets, aes(pet, score, fill=pet)) +
geom_violin(trim = FALSE, alpha = 0.5) +
stat_summary(
fun = mean,
fun.max = function(x) {mean(x) + sd(x)},
fun.min = function(x) {mean(x) - sd(x)},
geom="pointrange"
)
```
Figure 3\.31: Point\-range plot using stat\_summary()
### 3\.9\.3 Violin\-jitter plot
If you don’t have a lot of data points, it’s good to represent them individually. You can use `geom_jitter()` to do this.
```
# sample_n chooses 50 random observations from the dataset
ggplot(sample_n(pets, 50), aes(pet, score, fill=pet)) +
geom_violin(
trim = FALSE,
draw_quantiles = c(0.25, 0.5, 0.75),
alpha = 0.5
) +
geom_jitter(
width = 0.15, # points spread out over 15% of available width
height = 0, # do not move position on the y-axis
alpha = 0.5,
size = 3
)
```
Figure 3\.32: Violin\-jitter plot
### 3\.9\.4 Scatter\-line graph
If your graph isn’t too complicated, it’s good to also show the individual data points behind the line.
```
ggplot(sample_n(pets, 50), aes(age, weight, colour = pet)) +
geom_point() +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.33: Scatter\-line plot
### 3\.9\.5 Grid of plots
You can use the [`cowplot`](https://cran.r-project.org/web/packages/cowplot/vignettes/introduction.html) package to easily make grids of different graphs. First, you have to assign each plot a name. Then you list all the plots as the first arguments of `plot_grid()` and provide a vector of labels.
```
gg <- ggplot(pets, aes(pet, score, colour = pet))
nolegend <- theme(legend.position = "none")
vp <- gg + geom_violin(alpha = 0.5) + nolegend +
ggtitle("Violin Plot")
bp <- gg + geom_boxplot(alpha = 0.5) + nolegend +
ggtitle("Box Plot")
cp <- gg + stat_summary(fun = mean, geom = "col", fill = "white") + nolegend +
ggtitle("Column Plot")
dp <- ggplot(pets, aes(score, colour = pet)) +
geom_density() + nolegend +
ggtitle("Density Plot")
plot_grid(vp, bp, cp, dp, labels = LETTERS[1:4])
```
Figure 3\.34: Grid of plots
3\.10 Overlapping Discrete Data
-------------------------------
### 3\.10\.1 Reducing Opacity
You can deal with overlapping data points (very common if you’re using Likert scales) by reducing the opacity of the points. You need to use trial and error to adjust these so they look right.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_point(alpha = 0.25) +
geom_smooth(formula = y ~ x, method="lm")
```
Figure 3\.35: Deal with overlapping data using transparency
### 3\.10\.2 Proportional Dot Plots
Or you can set the size of the dot proportional to the number of overlapping observations using `geom_count()`.
```
ggplot(pets, aes(age, score, colour = pet)) +
geom_count()
```
Figure 3\.36: Deal with overlapping data using geom\_count()
Alternatively, you can transform your data (we will learn to do this in the [data wrangling](dplyr.html#dplyr) chapter) to create a count column and use the count to set the dot colour.
```
pets %>%
group_by(age, score) %>%
summarise(count = n(), .groups = "drop") %>%
ggplot(aes(age, score, color=count)) +
geom_point(size = 2) +
scale_color_viridis_c()
```
Figure 3\.37: Deal with overlapping data using dot colour
The [viridis package](https://cran.r-project.org/web/packages/viridis/vignettes/intro-to-viridis.html) changes the colour themes to be easier to read by people with colourblindness and to print better in greyscale. The viridis scales have been built into `ggplot2` since v3\.0\.0: use `scale_colour_viridis_c()` and `scale_fill_viridis_c()` for continuous variables and `scale_colour_viridis_d()` and `scale_fill_viridis_d()` for discrete variables.
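For example, a minimal sketch using the discrete fill scale for the `pet` factor:
```
# discrete viridis colours for a factor
ggplot(pets, aes(pet, score, fill = pet)) +
  geom_boxplot(alpha = 0.8, show.legend = FALSE) +
  scale_fill_viridis_d()
```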
3\.11 Overlapping Continuous Data
---------------------------------
Even if the variables are continuous, overplotting might obscure any relationships if you have lots of data.
```
ggplot(pets, aes(age, score)) +
geom_point()
```
Figure 3\.38: Overplotted data
### 3\.11\.1 2D Density Plot
Use `geom_density2d()` to create a contour map.
```
ggplot(pets, aes(age, score)) +
geom_density2d()
```
Figure 3\.39: Contour map with geom\_density2d()
You can use `stat_density_2d(aes(fill = ..level..), geom = "polygon")` to create a heatmap\-style density plot.
```
ggplot(pets, aes(age, score)) +
stat_density_2d(aes(fill = ..level..), geom = "polygon") +
scale_fill_viridis_c()
```
Figure 3\.40: Heatmap\-density plot
### 3\.11\.2 2D Histogram
Use `geom_bin2d()` to create a rectangular heatmap of bin counts. Set the `binwidth` to the x and y dimensions to capture in each box.
```
ggplot(pets, aes(age, score)) +
geom_bin2d(binwidth = c(1, 5))
```
Figure 3\.41: Heatmap of bin counts
### 3\.11\.3 Hexagonal Heatmap
Use `geom_hex()` to create a hexagonal heatmap of bin counts. Adjust the `binwidth`, `xlim()`, `ylim()` and/or the figure dimensions to make the hexagons more or less stretched.
```
ggplot(pets, aes(age, score)) +
geom_hex(binwidth = c(1, 5))
```
Figure 3\.42: Hexagonal heatmap of bin counts
### 3\.11\.4 Correlation Heatmap
I’ve included the code for creating a correlation matrix from a table of variables, but you don’t need to understand how this is done yet. We’ll cover `mutate()` and `gather()` functions in the [dplyr](dplyr.html#dplyr) and [tidyr](tidyr.html#tidyr) lessons.
```
heatmap <- pets %>%
select_if(is.numeric) %>% # get just the numeric columns
cor() %>% # create the correlation matrix
as_tibble(rownames = "V1") %>% # make it a tibble
gather("V2", "r", 2:ncol(.)) # wide to long (V2)
```
Once you have a correlation matrix in the correct (long) format, it’s easy to make a heatmap using `geom_tile()`.
```
ggplot(heatmap, aes(V1, V2, fill=r)) +
geom_tile() +
scale_fill_viridis_c()
```
Figure 3\.43: Heatmap using geom\_tile()
3\.12 Interactive Plots
-----------------------
You can use the `plotly` package to make interactive graphs. Just assign your ggplot to a variable and use the function `ggplotly()`.
```
demog_plot <- ggplot(pets, aes(age, score, fill=pet)) +
geom_point() +
geom_smooth(formula = y~x, method = lm)
ggplotly(demog_plot)
```
Figure 3\.44: Interactive graph using plotly
Hover over the data points above and click on the legend items.
3\.13 Glossary
--------------
| term | definition |
| --- | --- |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [geom](https://psyteachr.github.io/glossary/g#geom) | The geometric style in which data are displayed, such as boxplot, density, or histogram. |
| [likert](https://psyteachr.github.io/glossary/l#likert) | A rating scale with a small number of discrete points in order |
| [nominal](https://psyteachr.github.io/glossary/n#nominal) | Categorical variables that don’t have an inherent order, such as types of animal. |
| [ordinal](https://psyteachr.github.io/glossary/o#ordinal) | Discrete variables that have an inherent order, such as number of legs |
3\.14 Exercises
---------------
Download the [exercises](exercises/03_ggplot_exercise.Rmd). See the [plots](exercises/03_ggplot_answers.html) to see what your plots should look like (this doesn’t contain the answer code). See the [answers](exercises/03_ggplot_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(3)
# run this to access the answers
dataskills::exercise(3, answers = TRUE)
```
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/tidyr.html |
Chapter 4 Tidy Data
===================
4\.1 Learning Objectives
------------------------
### 4\.1\.1 Basic
1. Understand the concept of [tidy data](tidyr.html#tidy-data) [(video)](https://youtu.be/EsSN4OdsNpc)
2. Be able to convert between long and wide formats using pivot functions [(video)](https://youtu.be/4dvLmjhwN8I)
* [`pivot_longer()`](tidyr.html#pivot_longer)
* [`pivot_wider()`](tidyr.html#pivot_wider)
3. Be able to use the 4 basic `tidyr` verbs [(video)](https://youtu.be/oUWjb0JC8zM)
* [`gather()`](tidyr.html#gather)
* [`separate()`](tidyr.html#separate)
* [`spread()`](tidyr.html#spread)
* [`unite()`](tidyr.html#unite)
4. Be able to chain functions using [pipes](tidyr.html#pipes) [(video)](https://youtu.be/itfrlLaN4SE)
### 4\.1\.2 Advanced
5. Be able to use [regular expressions](#regex) to separate complex columns
4\.2 Resources
--------------
* [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
* [Chapter 12: Tidy Data](http://r4ds.had.co.nz/tidy-data.html) in *R for Data Science*
* [Chapter 18: Pipes](http://r4ds.had.co.nz/pipes.html) in *R for Data Science*
* [Data wrangling cheat sheet](https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf)
4\.3 Setup
----------
```
# libraries needed
library(tidyverse)
library(dataskills)
set.seed(8675309) # makes sure random numbers are reproducible
```
4\.4 Tidy Data
--------------
### 4\.4\.1 Three Rules
* Each [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.") must have its own column
* Each [observation](https://psyteachr.github.io/glossary/o#observation "All of the data about a single trial or question.") must have its own row
* Each [value](https://psyteachr.github.io/glossary/v#value "A single number or piece of data.") must have its own cell
This table has three observations per row and the `total_meanRT` column contains two values.
Table 4\.1: Untidy table
| id | score\_1 | score\_2 | score\_3 | rt\_1 | rt\_2 | rt\_3 | total\_meanRT |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 4 | 3 | 7 | 857 | 890 | 859 | 14 (869\) |
| 2 | 3 | 1 | 1 | 902 | 900 | 959 | 5 (920\) |
| 3 | 2 | 5 | 4 | 757 | 823 | 901 | 11 (827\) |
| 4 | 6 | 2 | 6 | 844 | 788 | 624 | 14 (752\) |
| 5 | 1 | 7 | 2 | 659 | 764 | 690 | 10 (704\) |
This is the tidy version.
Table 4\.1: Tidy table
| id | trial | rt | score | total | mean\_rt |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 857 | 4 | 14 | 869 |
| 1 | 2 | 890 | 3 | 14 | 869 |
| 1 | 3 | 859 | 7 | 14 | 869 |
| 2 | 1 | 902 | 3 | 5 | 920 |
| 2 | 2 | 900 | 1 | 5 | 920 |
| 2 | 3 | 959 | 1 | 5 | 920 |
| 3 | 1 | 757 | 2 | 11 | 827 |
| 3 | 2 | 823 | 5 | 11 | 827 |
| 3 | 3 | 901 | 4 | 11 | 827 |
| 4 | 1 | 844 | 6 | 14 | 752 |
| 4 | 2 | 788 | 2 | 14 | 752 |
| 4 | 3 | 624 | 6 | 14 | 752 |
| 5 | 1 | 659 | 1 | 10 | 704 |
| 5 | 2 | 764 | 7 | 10 | 704 |
| 5 | 3 | 690 | 2 | 10 | 704 |
### 4\.4\.2 Wide versus long
Data tables can be in [wide](https://psyteachr.github.io/glossary/w#wide "Data where all of the observations about one subject are in the same row") format or [long](https://psyteachr.github.io/glossary/l#long "Data where each observation is on a separate row") format (and sometimes a mix of the two). Wide data are where all of the observations about one subject are in the same row, while long data are where each observation is on a separate row. You often need to convert between these formats to do different types of analyses or data processing.
Imagine a study where each subject completes a questionnaire with three items. Each answer is an [observation](https://psyteachr.github.io/glossary/o#observation "All of the data about a single trial or question.") of that subject. You are probably most familiar with data like this in a wide format, where the subject `id` is in one column, and each of the three item responses is in its own column.
Table 4\.2: Wide data
| id | Q1 | Q2 | Q3 |
| --- | --- | --- | --- |
| A | 1 | 2 | 3 |
| B | 4 | 5 | 6 |
The same data can be represented in a long format by creating a new column that specifies what `item` the observation is from and a new column that specifies the `value` of that observation.
Table 4\.3: Long data
| id | item | value |
| --- | --- | --- |
| A | Q1 | 1 |
| B | Q1 | 4 |
| A | Q2 | 2 |
| B | Q2 | 5 |
| A | Q3 | 3 |
| B | Q3 | 6 |
Create a long version of the following table.
| id | fav\_colour | fav\_animal |
| --- | --- | --- |
| Lisa | red | echidna |
| Robbie | orange | babirusa |
| Steven | green | frog |
Answer
Your answer doesn’t need to have the same column headers or be in the same order.
| id | fav | answer |
| --- | --- | --- |
| Lisa | colour | red |
| Lisa | animal | echidna |
| Robbie | colour | orange |
| Robbie | animal | babirusa |
| Steven | colour | green |
| Steven | animal | frog |
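One way to produce a table like this with code (a sketch using `pivot_longer()`, which is introduced in the next section; the `favourites` tibble is typed in by hand):
```
favourites <- tibble(
  id = c("Lisa", "Robbie", "Steven"),
  fav_colour = c("red", "orange", "green"),
  fav_animal = c("echidna", "babirusa", "frog")
)

favourites %>%
  pivot_longer(cols = fav_colour:fav_animal,
               names_to = "fav",       # "colour" or "animal"
               names_prefix = "fav_",  # strip the "fav_" prefix
               values_to = "answer")
```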
4\.5 Pivot Functions
--------------------
The pivot functions allow you to transform a data table from wide to long or long to wide in one step.
### 4\.5\.1 Load Data
We will use the dataset `personality` from the dataskills package (or download the data from [personality.csv](https://psyteachr.github.io/msc-data-skills/data/personality.csv)). These data are from a 5\-factor personality questionnaire. Each question is labelled with the domain (Op \= openness, Co \= conscientiousness, Ex \= extroversion, Ag \= agreeableness, and Ne \= neuroticism) and the question number.
```
data("personality", package = "dataskills")
```
| user\_id | date | Op1 | Ne1 | Ne2 | Op2 | Ex1 | Ex2 | Co1 | Co2 | Ne3 | Ag1 | Ag2 | Ne4 | Ex3 | Co3 | Op3 | Ex4 | Op4 | Ex5 | Ag3 | Co4 | Co5 | Ne5 | Op5 | Ag4 | Op6 | Co6 | Ex6 | Ne6 | Co7 | Ag5 | Co8 | Ex7 | Ne7 | Co9 | Op7 | Ne8 | Ag6 | Ag7 | Co10 | Ex8 | Ex9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 2006\-03\-23 | 3 | 4 | 0 | 6 | 3 | 3 | 3 | 3 | 0 | 2 | 1 | 3 | 3 | 2 | 2 | 1 | 3 | 3 | 1 | 3 | 0 | 3 | 6 | 1 | 0 | 6 | 3 | 1 | 3 | 3 | 3 | 3 | NA | 3 | 0 | 2 | NA | 3 | 1 | 2 | 4 |
| 1 | 2006\-02\-08 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 | 6 | 6 | 6 | 0 | 6 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 |
| 2 | 2005\-10\-24 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 5 | 1 | 5 | 1 | 1 | 1 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 5 | 5 | 5 | 1 | 5 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 |
| 5 | 2005\-12\-07 | 6 | 4 | 4 | 4 | 2 | 3 | 3 | 3 | 1 | 4 | 0 | 2 | 5 | 3 | 5 | 3 | 6 | 6 | 1 | 5 | 5 | 4 | 2 | 4 | 1 | 4 | 3 | 1 | 1 | 0 | 1 | 4 | 2 | 4 | 5 | 1 | 2 | 1 | 5 | 4 | 5 |
| 8 | 2006\-07\-27 | 6 | 1 | 2 | 6 | 2 | 3 | 5 | 4 | 0 | 6 | 5 | 3 | 3 | 4 | 5 | 3 | 6 | 3 | 0 | 5 | 5 | 1 | 5 | 6 | 6 | 6 | 0 | 0 | 3 | 2 | 3 | 1 | 0 | 3 | 5 | 1 | 3 | 1 | 3 | 3 | 5 |
| 108 | 2006\-02\-28 | 3 | 2 | 1 | 4 | 4 | 4 | 4 | 3 | 1 | 5 | 4 | 2 | 3 | 4 | 4 | 3 | 3 | 3 | 4 | 3 | 3 | 1 | 4 | 5 | 4 | 5 | 4 | 1 | 4 | 5 | 4 | 2 | 2 | 4 | 4 | 1 | 4 | 3 | 5 | 4 | 2 |
### 4\.5\.2 pivot\_longer()
`pivot_longer()` converts a wide data table to long format by converting the headers from specified columns into the values of new columns, and combining the values of those columns into a new condensed column.
* `cols` refers to the columns you want to make long. You can refer to them by their names, like `col1, col2, col3, col4` or `col1:col4`, or by their numbers, like `8, 9, 10` or `8:10`.
* `names_to` is what you want to call the new columns that the gathered column headers will go into; it’s “domain” and “qnumber” in this example.
* `names_sep` is an optional argument if you have more than one value for `names_to`. It specifies the characters or position to split the values of the `cols` headers.
* `values_to` is what you want to call the new column that holds the values from the gathered columns; it’s “score” in this example.
```
personality_long <- pivot_longer(
data = personality,
cols = Op1:Ex9, # columns to make long
names_to = c("domain", "qnumber"), # new column names for headers
names_sep = 2, # how to split the headers
values_to = "score" # new column name for values
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
## $ date <date> 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2…
## $ domain <chr> "Op", "Ne", "Ne", "Op", "Ex", "Ex", "Co", "Co", "Ne", "Ag", "A…
## $ qnumber <chr> "1", "1", "2", "2", "1", "2", "1", "2", "3", "1", "2", "4", "3…
## $ score <dbl> 3, 4, 0, 6, 3, 3, 3, 3, 0, 2, 1, 3, 3, 2, 2, 1, 3, 3, 1, 3, 0,…
```
You can pipe a data table to `glimpse()` at the end to have a quick look at it. It will still save to the object.
What would you set `names_sep` to in order to split the `cols` headers listed below into the results?
| `cols` | `names_to` | `names_sep` |
| --- | --- | --- |
| `A_1`, `A_2`, `B_1`, `B_2` | `c("condition", "version")` | `"_"` |
| `A1`, `A2`, `B1`, `B2` | `c("condition", "version")` | `1` |
| `cat-day&pre`, `cat-day&post`, `cat-night&pre`, `cat-night&post`, `dog-day&pre`, `dog-day&post`, `dog-night&pre`, `dog-night&post` | `c("pet", "time", "condition")` | `"[-&]"` |
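As a quick check of the first two rows, here is a small sketch with made-up data (the `wide1` and `wide2` tibbles are invented for illustration):
```
# first row: split the headers at the underscore
wide1 <- tibble(id = 1:2, A_1 = 1:2, A_2 = 3:4, B_1 = 5:6, B_2 = 7:8)
pivot_longer(wide1, cols = A_1:B_2,
             names_to = c("condition", "version"),
             names_sep = "_",
             values_to = "value")

# second row: no separator character, so split after position 1
wide2 <- tibble(id = 1:2, A1 = 1:2, A2 = 3:4, B1 = 5:6, B2 = 7:8)
pivot_longer(wide2, cols = A1:B2,
             names_to = c("condition", "version"),
             names_sep = 1,
             values_to = "value")
```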
### 4\.5\.3 pivot\_wider()
We can also go from long to wide format using the `pivot_wider()` function.
* `names_from` is the columns that contain your new column headers.
* `values_from` is the column that contains the values for the new columns.
* `names_sep` is the character string used to join names if `names_from` is more than one column.
```
personality_wide <- pivot_wider(
data = personality_long,
names_from = c(domain, qnumber),
values_from = score,
names_sep = ""
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Op1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Ne1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Op2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Ex1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Co1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Ne3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ag1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ne4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ex3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Co3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Op3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Ex4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Op4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Ex5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ag3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Co4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Ne5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Op5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Ag4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Op6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Co6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Ex6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ne6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Co7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Ag5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Co8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Ex7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ne7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Co9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Op7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
## $ Ne8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Ag6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Ex8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
```
4\.6 Tidy Verbs
---------------
The pivot functions above are relatively new functions that combine the four basic tidy verbs. You can also convert data between long and wide formats using these functions. Many researchers still use these functions and older code will not use the pivot functions, so it is useful to know how to interpret these.
### 4\.6\.1 gather()
Much like `pivot_longer()`, `gather()` makes a wide data table long by creating a column for the headers and a column for the values. The main difference is that you cannot turn the headers into more than one column.
* `key` is what you want to call the new column that the gathered column headers will go into; it’s “question” in this example. It is like `names_to` in `pivot_longer()`, but can only take one value (multiple values need to be split out afterwards using `separate()`).
* `value` is what you want to call the values in the gathered columns; they’re “score” in this example. It is like `values_to` in `pivot_longer()`.
* `...` refers to the columns you want to gather. It is like `cols` in `pivot_longer()`.
The `gather()` function converts `personality` from a wide data table to long format, with a row for each user/question observation. The resulting data table should have the columns: `user_id`, `date`, `question`, and `score`.
```
personality_gathered <- gather(
data = personality,
key = "question", # new column name for gathered headers
value = "score", # new column name for gathered values
Op1:Ex9 # columns to gather
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ question <chr> "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.2 separate()
* `col` is the column you want to separate
* `into` is a vector of new column names
* `sep` is the character(s) that separate your new columns. This defaults to anything that isn’t alphanumeric, such as `.`, `,`, `_`, `-`, `/` or `:`, and is like the `names_sep` argument in `pivot_longer()`.
Split the `question` column into two columns: `domain` and `qnumber`.
There is no character to split on here, but you can separate a column after a specific number of characters by setting `sep` to an integer. For example, to split “abcde” after the third character, use `sep = 3`, which results in `c("abc", "de")`. You can also use a negative number to split before the *n*th character from the right. For example, to split a column that has words of various lengths and 2\-digit suffixes (like “lisa03” or “amanda38”), you can use `sep = -2`.
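For instance, a minimal toy check of that negative-`sep` case (the `code` column below is made up for illustration):
```
# split 2-digit suffixes off the end of each string
tibble(code = c("lisa03", "amanda38")) %>%
  separate(code, into = c("name", "suffix"), sep = -2)
```
Back to the personality data, where we split after the second character: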
```
personality_sep <- separate(
data = personality_gathered,
col = question, # column to separate
into = c("domain", "qnumber"), # new column names
sep = 2 # where to separate
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ domain <chr> "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "O…
## $ qnumber <chr> "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
```
If you want to separate just at full stops, you need to use `sep = "\\."`, not `sep = "."`. The two backslashes **escape** the full stop, making it interpreted as a literal full stop and not the regular expression for any character.
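For example, a small sketch with an invented `price` column:
```
# split "pounds.pence" strings at the literal full stop
tibble(price = c("19.99", "5.50")) %>%
  separate(price, into = c("pounds", "pence"), sep = "\\.")
```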
### 4\.6\.3 unite()
* `col` is your new united column
* `...` refers to the columns you want to unite
* `sep` is the character(s) that will separate your united columns
Put the domain and qnumber columns back together into a new column named `domain_n`. Make it in a format like “Op\_Q1\.”
```
personality_unite <- unite(
data = personality_sep,
col = "domain_n", # new column name
domain, qnumber, # columns to unite
sep = "_Q" # separation characters
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ domain_n <chr> "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.4 spread()
You can reverse the processes above, as well. For example, you can convert data from long format into wide format.
* `key` is the column that contains your new column headers. It is like `names_from` in `pivot_wider()`, but can only take one value (multiple values need to be merged first using `unite()`).
* `value` is the column that contains the values in the new spread columns. It is like `values_from` in `pivot_wider()`.
```
personality_spread <- spread(
data = personality_unite,
key = domain_n, # column that contains new headers
value = score # column that contains new values
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Ag_Q1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag_Q2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ag_Q3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Ag_Q4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Ag_Q5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Ag_Q6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag_Q7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co_Q1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co_Q10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Co_Q2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Co_Q3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Co_Q4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co_Q5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Co_Q6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Co_Q7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Co_Q8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Co_Q9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Ex_Q1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex_Q2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Ex_Q3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Ex_Q4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Ex_Q5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ex_Q6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ex_Q7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ex_Q8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex_Q9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
## $ Ne_Q1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne_Q2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Ne_Q3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ne_Q4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ne_Q5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Ne_Q6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Ne_Q7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Ne_Q8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Op_Q1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Op_Q2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Op_Q3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Op_Q4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Op_Q5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Op_Q6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Op_Q7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
```
4\.7 Pipes
----------
Pipes are a way to order your code in a more readable format.
Let’s say you have a small data table with 10 participant IDs, two columns with variable type A, and 2 columns with variable type B. You want to calculate the mean of the A variables and the mean of the B variables and return a table with 10 rows (1 for each participant) and 3 columns (`id`, `A_mean` and `B_mean`).
One way you could do this is by creating a new object at every step and using that object in the next step. This is pretty clear, but you’ve created 6 unnecessary data objects in your environment. This can get confusing in very long scripts.
```
# make a data table with 10 subjects
data_original <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10, 3)
)
# gather columns A1 to B2 into "variable" and "value" columns
data_gathered <- gather(data_original, variable, value, A1:B2)
# separate the variable column at the _ into "var" and "var_n" columns
data_separated <- separate(data_gathered, variable, c("var", "var_n"), sep = 1)
# group the data by id and var
data_grouped <- group_by(data_separated, id, var)
# calculate the mean value for each id/var
data_summarised <- summarise(data_grouped, mean = mean(value), .groups = "drop")
# spread the mean column into A and B columns
data_spread <- spread(data_summarised, var, mean)
# rename A and B to A_mean and B_mean
data <- rename(data_spread, A_mean = A, B_mean = B)
data
```
| id | A\_mean | B\_mean |
| --- | --- | --- |
| 1 | \-0\.5938256 | 1\.0243046 |
| 2 | 0\.7440623 | 2\.7172046 |
| 3 | 0\.9309275 | 3\.9262358 |
| 4 | 0\.7197686 | 1\.9662632 |
| 5 | \-0\.0280832 | 1\.9473456 |
| 6 | \-0\.0982555 | 3\.2073687 |
| 7 | 0\.1256922 | 0\.9256321 |
| 8 | 1\.4526447 | 2\.3778116 |
| 9 | 0\.2976443 | 1\.6617481 |
| 10 | 0\.5589199 | 2\.1034679 |
You *can* name each object `data` and keep replacing the old data object with the new one at each step. This will keep your environment clean, but I don’t recommend it because it makes it too easy to accidentally run your code out of order when you are running line\-by\-line for development or debugging.
One way to avoid extra objects is to nest your functions, literally replacing each data object with the code that generated it in the previous step. This can be fine for very short chains.
```
mean_petal_width <- round(mean(iris$Petal.Width), 2)
```
But it gets extremely confusing for long chains:
```
# do not ever do this!!
data <- rename(
spread(
summarise(
group_by(
separate(
gather(
tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)),
variable, value, A1:B2),
variable, c("var", "var_n"), sep = 1),
id, var),
mean = mean(value), .groups = "drop"),
var, mean),
A_mean = A, B_mean = B)
```
The pipe lets you “pipe” the result of each function into the next function, allowing you to put your code in a logical order without creating too many extra objects.
```
# calculate mean of A and B variables for each participant
data <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)
) %>%
gather(variable, value, A1:B2) %>%
separate(variable, c("var", "var_n"), sep=1) %>%
group_by(id, var) %>%
summarise(mean = mean(value), .groups = "drop") %>%
spread(var, mean) %>%
rename(A_mean = A, B_mean = B)
```
You can read this code from top to bottom as follows:
1. Make a tibble called `data` with
* id of 1 to 10,
* A1 of 10 random numbers from a normal distribution,
* A2 of 10 random numbers from a normal distribution,
* B1 of 10 random numbers from a normal distribution,
* B2 of 10 random numbers from a normal distribution; and then
2. Gather to create `variable` and `value` columns from columns `A1` to `B2`; and then
3. Separate the column `variable` into 2 new columns called `var`and `var_n`, separate at character 1; and then
4. Group by columns `id` and `var`; and then
5. Summarise a new column called `mean` as the mean of the `value` column for each group and drop the grouping; and then
6. Spread to make new columns with the key names in `var` and values in `mean`; and then
7. Rename to make columns called `A_mean` (old `A`) and `B_mean` (old `B`)
You can make intermediate objects whenever you need to break up your code because it’s getting too complicated or you need to debug something.
You can debug a pipe by highlighting from the beginning to just before the pipe you want to stop at. Try this by highlighting from `data <-` to the end of the `separate` function and typing cmd\-return. What does `data` look like now?
Chain all the steps above using pipes.
```
personality_reshaped <- personality %>%
gather("question", "score", Op1:Ex9) %>%
separate(question, c("domain", "qnumber"), sep = 2) %>%
unite("domain_n", domain, qnumber, sep = "_Q") %>%
spread(domain_n, score)
```
4\.8 More Complex Example
-------------------------
### 4\.8\.1 Load Data
Get data on infant and maternal mortality rates from the dataskills package. If you don’t have the package, you can download them here:
* [infant mortality](https://psyteachr.github.io/msc-data-skills/data/infmort.csv)
* [maternal mortality](https://psyteachr.github.io/msc-data-skills/data/matmort.xls)
```
data("infmort", package = "dataskills")
head(infmort)
```
| Country | Year | Infant mortality rate (probability of dying between birth and age 1 per 1000 live births) |
| --- | --- | --- |
| Afghanistan | 2015 | 66\.3 \[52\.7\-83\.9] |
| Afghanistan | 2014 | 68\.1 \[55\.7\-83\.6] |
| Afghanistan | 2013 | 69\.9 \[58\.7\-83\.5] |
| Afghanistan | 2012 | 71\.7 \[61\.6\-83\.7] |
| Afghanistan | 2011 | 73\.4 \[64\.4\-84\.2] |
| Afghanistan | 2010 | 75\.1 \[66\.9\-85\.1] |
```
data("matmort", package = "dataskills")
head(matmort)
```
| Country | 1990 | 2000 | 2015 |
| --- | --- | --- | --- |
| Afghanistan | 1 340 \[ 878 \- 1 950] | 1 100 \[ 745 \- 1 570] | 396 \[ 253 \- 620] |
| Albania | 71 \[ 58 \- 88] | 43 \[ 33 \- 56] | 29 \[ 16 \- 46] |
| Algeria | 216 \[ 141 \- 327] | 170 \[ 118 \- 241] | 140 \[ 82 \- 244] |
| Angola | 1 160 \[ 627 \- 2 020] | 924 \[ 472 \- 1 730] | 477 \[ 221 \- 988] |
| Argentina | 72 \[ 64 \- 80] | 60 \[ 54 \- 65] | 52 \[ 44 \- 63] |
| Armenia | 58 \[ 51 \- 65] | 40 \[ 35 \- 46] | 25 \[ 21 \- 31] |
### 4\.8\.2 Wide to Long
`matmort` is in wide format, with a separate column for each year. Change it to long format, with a row for each Country/Year observation.
This example is complicated because the column names to gather *are* numbers. If the column names are non\-standard (e.g., have spaces, start with numbers, or have special characters), you can enclose them in backticks (\`) like the example below.
```
matmort_long <- matmort %>%
pivot_longer(cols = `1990`:`2015`,
names_to = "Year",
values_to = "stats") %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Albania", "Alban…
## $ Year <chr> "1990", "2000", "2015", "1990", "2000", "2015", "1990", "2000"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "1 100 [ 745 - 1 570]", "396 [ 253 - …
```
You can put `matmort` as the first argument to `pivot_longer()`; you don’t have to pipe it in. But when I’m working on data processing I often find myself needing to insert or rearrange steps, and I constantly introduce errors by forgetting to take the first argument out of a pipe chain, so now I start with the original data table and pipe from there.
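In other words, these two calls are equivalent (a sketch repeating the pivot from above):
```
# first-argument style
matmort_long <- pivot_longer(matmort,
                             cols = `1990`:`2015`,
                             names_to = "Year",
                             values_to = "stats")

# pipe style (used throughout this chapter)
matmort_long <- matmort %>%
  pivot_longer(cols = `1990`:`2015`,
               names_to = "Year",
               values_to = "stats")
```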
Alternatively, you can use the `gather()` function.
```
matmort_long <- matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "71 [ 58 - 88]", "216 [ 141 - 327]",…
```
### 4\.8\.3 One Piece of Data per Column
The data in the `stats` column is in an unusual format with some sort of confidence interval in brackets and lots of extra spaces. We don’t need any of the spaces, so first we’ll remove them with `mutate()`, which we’ll learn more about in the next lesson.
The `separate` function will separate your data on anything that is not a number or letter, so try it first without specifying the `sep` argument. The `into` argument is a list of the new column names.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi")) %>%
glimpse()
```
```
## Warning: Expected 3 pieces. Additional pieces discarded in 543 rows [1, 2, 3, 4,
## 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...].
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
The `gsub(pattern, replacement, x)` function is a flexible way to do search and replace. The example above replaces all occurrences of the `pattern` `" "` (a space) with the `replacement` `""` (nothing) in the string `x` (the `stats` column). Use `sub()` instead if you only want to replace the first occurrence of a pattern. We only used a simple pattern here, but you can use more complicated [regex](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) patterns to replace, for example, all even digits (e.g., `gsub("[02468]", "", "id = 123456")`) or all occurrences of the word colour in US or UK spelling (e.g., `gsub("colo(u)?r", "**", "replace color, colour, or colours, but not collors")`).
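Written out so you can run them, those two examples give (a quick sketch; the outputs shown as comments are what base R returns):
```
# remove even digits from a string
gsub("[02468]", "", "id = 123456")
## [1] "id = 135"

# replace UK or US spellings of "colour", but not misspellings
gsub("colo(u)?r", "**", "replace color, colour, or colours, but not collors")
## [1] "replace **, **, or **s, but not collors"
```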
#### 4\.8\.3\.1 Handle spare columns with `extra`
The previous example should have given you a warning about “Additional pieces discarded in 543 rows.” This is because `separate` splits the column at the brackets and dashes, so the text `100[90-110]` would split into four values `c("100", "90", "110", "")`, but we only specified 3 new columns. The fourth value is always empty (just the part after the last bracket), so we are happy to drop it, but `separate` generates a warning so you don’t do that accidentally. You can turn off the warning by adding the `extra` argument and setting it to `"drop"`. Look at the help for `??tidyr::separate` to see what the other options do.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
#### 4\.8\.3\.2 Set delimiters with `sep`
Now do the same with `infmort`. It’s already in long format, so you don’t need to use `gather`, but the third column has a ridiculously long name, so we can just refer to it by its column number (3\).
```
infmort_split <- infmort %>%
separate(3, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66", "68", "69", "71", "73", "75", "76", "78", "80", "82", "8…
## $ ci_low <chr> "3", "1", "9", "7", "4", "1", "8", "6", "4", "3", "4", "7", "0…
## $ ci_hi <chr> "52", "55", "58", "61", "64", "66", "69", "71", "73", "75", "7…
```
**Wait, that didn’t work at all!** It split the column on spaces, brackets, *and* full stops. We just want to split on the brackets and dashes. So we need to manually set `sep` to what the delimiters are. Also, once there are more than a few arguments specified for a function, it’s easier to read them if you put one argument on each line.
You can use [regular expressions](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) to separate complex columns. Here, we want to separate on dashes and brackets. You can separate on a list of delimiters by putting them in parentheses, separated by `|`. It’s a little more complicated because brackets have a special meaning in regex, so you need to “escape” the left one with two backslashes `\\`.
```
infmort_split <- infmort %>%
separate(
col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])"
) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66.3 ", "68.1 ", "69.9 ", "71.7 ", "73.4 ", "75.1 ", "76.8 ",…
## $ ci_low <chr> "52.7", "55.7", "58.7", "61.6", "64.4", "66.9", "69.0", "71.2"…
## $ ci_hi <chr> "83.9", "83.6", "83.5", "83.7", "84.2", "85.1", "86.1", "87.3"…
```
#### 4\.8\.3\.3 Fix data types with `convert`
That’s better. Notice the `<chr>` next to `rate`, `ci_low` and `ci_hi` in the output above. That means these columns hold characters (like words), not numbers or integers. This can cause problems when you try to do things like average the numbers (you can’t average words), so we can fix it by adding the argument `convert` and setting it to `TRUE`.
```
infmort_split <- infmort %>%
separate(col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <dbl> 66.3, 68.1, 69.9, 71.7, 73.4, 75.1, 76.8, 78.6, 80.4, 82.3, 84…
## $ ci_low <dbl> 52.7, 55.7, 58.7, 61.6, 64.4, 66.9, 69.0, 71.2, 73.4, 75.5, 77…
## $ ci_hi <dbl> 83.9, 83.6, 83.5, 83.7, 84.2, 85.1, 86.1, 87.3, 88.9, 90.7, 92…
```
Do the same for `matmort`.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.4 All in one step
We can chain all the steps for `matmort` above together, since we don’t need those intermediate data tables.
```
matmort2<- dataskills::matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(
col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE
) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.5 Columns by Year
Spread out the maternal mortality rate by year.
```
matmort_wide <- matmort2 %>%
spread(key = Year, value = rate) %>%
print()
```
```
## # A tibble: 542 x 6
## Country ci_low ci_hi `1990` `2000` `2015`
## <chr> <int> <int> <int> <int> <int>
## 1 Afghanistan 253 620 NA NA 396
## 2 Afghanistan 745 1570 NA 1100 NA
## 3 Afghanistan 878 1950 1340 NA NA
## 4 Albania 16 46 NA NA 29
## 5 Albania 33 56 NA 43 NA
## 6 Albania 58 88 71 NA NA
## 7 Algeria 82 244 NA NA 140
## 8 Algeria 118 241 NA 170 NA
## 9 Algeria 141 327 216 NA NA
## 10 Angola 221 988 NA NA 477
## # … with 532 more rows
```
Nope, that didn’t work at all, but it’s a really common mistake when spreading data. This is because `spread` matches on all the remaining columns, so Afghanistan with `ci_low` of 253 is treated as a different observation than Afghanistan with `ci_low` of 745\.
This is where `pivot_wider()` can be very useful. You can set `values_from` to multiple column names and their names will be added to the `names_from` values.
```
matmort_wide <- matmort2 %>%
pivot_wider(
names_from = Year,
values_from = c(rate, ci_low, ci_hi)
)
glimpse(matmort_wide)
```
```
## Rows: 181
## Columns: 10
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina"…
## $ rate_1990 <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33…
## $ rate_2000 <int> 1100, 43, 170, 924, 60, 40, 9, 5, 48, 61, 21, 399, 48, 26,…
## $ rate_2015 <int> 396, 29, 140, 477, 52, 25, 6, 4, 25, 80, 15, 176, 27, 4, 7…
## $ ci_low_1990 <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, …
## $ ci_low_2000 <int> 745, 33, 118, 472, 54, 35, 8, 4, 42, 50, 18, 322, 38, 22, …
## $ ci_low_2015 <int> 253, 16, 82, 221, 44, 21, 5, 3, 17, 53, 12, 125, 19, 3, 5,…
## $ ci_hi_1990 <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 3…
## $ ci_hi_2000 <int> 1570, 56, 241, 1730, 65, 46, 10, 6, 55, 74, 26, 496, 58, 3…
## $ ci_hi_2015 <int> 620, 46, 244, 988, 63, 31, 7, 5, 35, 124, 19, 280, 37, 6, …
```
### 4\.8\.6 Experimentum Data
Students in the Institute of Neuroscience and Psychology at the University of Glasgow can use the online experiment builder platform, [Experimentum](https://debruine.github.io/experimentum/). The platform is also [open source on github](https://github.com/debruine/experimentum) for anyone who can install it on a web server. It allows you to group questionnaires and experiments into **projects** with randomisation and counterbalancing. Data for questionnaires and experiments are downloadable in long format, but researchers often need to put them in wide format for analysis.
Look at the help menu for built\-in dataset `dataskills::experimentum_quests` to learn what each column is. Subjects are asked questions about dogs to test the different questionnaire response types.
* current: Do you own a dog? (yes/no)
* past: Have you ever owned a dog? (yes/no)
* name: What is the best name for a dog? (free short text)
* good: How good are dogs? (1\=pretty good:7\=very good)
* country: What country do borzois come from?
* good\_borzoi: How good are borzois? (0\=pretty good:100\=very good)
* text: Write some text about dogs. (free long text)
* time: What time is it? (time)
To get the dataset into wide format, where each question is in a separate column, use the following code:
```
q <- dataskills::experimentum_quests %>%
pivot_wider(id_cols = session_id:user_age,
names_from = q_name,
values_from = dv) %>%
type.convert(as.is = TRUE) %>%
print()
```
```
## # A tibble: 24 x 15
## session_id project_id quest_id user_id user_sex user_status user_age current
## <int> <int> <int> <int> <chr> <chr> <dbl> <int>
## 1 34034 1 1 31105 female guest 28.2 1
## 2 34104 1 1 31164 male registered 19.4 1
## 3 34326 1 1 31392 female guest 17 0
## 4 34343 1 1 31397 male guest 22 1
## 5 34765 1 1 31770 female guest 44 1
## 6 34796 1 1 31796 female guest 35.9 0
## 7 34806 1 1 31798 female guest 35 0
## 8 34822 1 1 31802 female guest 58 1
## 9 34864 1 1 31820 male guest 20 0
## 10 35014 1 1 31921 female student 39.2 1
## # … with 14 more rows, and 7 more variables: past <int>, name <chr>,
## # good <int>, country <chr>, text <chr>, good_borzoi <int>, time <chr>
```
The responses in the `dv` column have multiple types (e.g., integer, double, and character), but they are all represented as character strings when they’re in the same column. After you spread the data to wide format, each column should be given the correct data type. The function `type.convert()` makes a best guess at what type each new column should be and converts it; columns where all of the numbers are whole numbers become integers, and the argument `as.is = TRUE` keeps text columns as characters rather than converting them to factors.
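As a small illustration (a sketch with a made\-up tibble, not part of the Experimentum data), here is what `type.convert(as.is = TRUE)` does to columns that start out as character:
```
# a made-up tibble where every column starts as character
chr_table <- tibble(
  n   = c("1", "2", "3"),        # whole numbers -> integer
  x   = c("1.5", "2.0", "3.2"),  # decimal places -> double
  lab = c("a", "b", "c")         # as.is = TRUE keeps this as character, not factor
)

chr_table %>%
  type.convert(as.is = TRUE) %>%
  glimpse()
```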
4\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [long](https://psyteachr.github.io/glossary/l#long) | Data where each observation is on a separate row |
| [observation](https://psyteachr.github.io/glossary/o#observation) | All of the data about a single trial or question. |
| [value](https://psyteachr.github.io/glossary/v#value) | A single number or piece of data. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [wide](https://psyteachr.github.io/glossary/w#wide) | Data where all of the observations about one subject are in the same row |
4\.10 Exercises
---------------
Download the [exercises](exercises/04_tidyr_exercise.Rmd). See the [answers](exercises/04_tidyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(4)
# run this to access the answers
dataskills::exercise(4, answers = TRUE)
```
4\.1 Learning Objectives
------------------------
### 4\.1\.1 Basic
1. Understand the concept of [tidy data](tidyr.html#tidy-data) [(video)](https://youtu.be/EsSN4OdsNpc)
2. Be able to convert between long and wide formats using pivot functions [(video)](https://youtu.be/4dvLmjhwN8I)
* [`pivot_longer()`](tidyr.html#pivot_longer)
* [`pivot_wider()`](tidyr.html#pivot_wider)
3. Be able to use the 4 basic `tidyr` verbs [(video)](https://youtu.be/oUWjb0JC8zM)
* [`gather()`](tidyr.html#gather)
* [`separate()`](tidyr.html#separate)
* [`spread()`](tidyr.html#spread)
* [`unite()`](tidyr.html#unite)
4. Be able to chain functions using [pipes](tidyr.html#pipes) [(video)](https://youtu.be/itfrlLaN4SE)
### 4\.1\.2 Advanced
5. Be able to use [regular expressions](#regex) to separate complex columns
4\.2 Resources
--------------
* [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
* [Chapter 12: Tidy Data](http://r4ds.had.co.nz/tidy-data.html) in *R for Data Science*
* [Chapter 18: Pipes](http://r4ds.had.co.nz/pipes.html) in *R for Data Science*
* [Data wrangling cheat sheet](https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf)
4\.3 Setup
----------
```
# libraries needed
library(tidyverse)
library(dataskills)
set.seed(8675309) # makes sure random numbers are reproducible
```
4\.4 Tidy Data
--------------
### 4\.4\.1 Three Rules
* Each [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.") must have its own column
* Each [observation](https://psyteachr.github.io/glossary/o#observation "All of the data about a single trial or question.") must have its own row
* Each [value](https://psyteachr.github.io/glossary/v#value "A single number or piece of data.") must have its own cell
This table has three observations per row and the `total_meanRT` column contains two values.
Table 4\.1: Untidy table
| id | score\_1 | score\_2 | score\_3 | rt\_1 | rt\_2 | rt\_3 | total\_meanRT |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 4 | 3 | 7 | 857 | 890 | 859 | 14 (869\) |
| 2 | 3 | 1 | 1 | 902 | 900 | 959 | 5 (920\) |
| 3 | 2 | 5 | 4 | 757 | 823 | 901 | 11 (827\) |
| 4 | 6 | 2 | 6 | 844 | 788 | 624 | 14 (752\) |
| 5 | 1 | 7 | 2 | 659 | 764 | 690 | 10 (704\) |
This is the tidy version.
Table 4\.1: Tidy table
| id | trial | rt | score | total | mean\_rt |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 857 | 4 | 14 | 869 |
| 1 | 2 | 890 | 3 | 14 | 869 |
| 1 | 3 | 859 | 7 | 14 | 869 |
| 2 | 1 | 902 | 3 | 5 | 920 |
| 2 | 2 | 900 | 1 | 5 | 920 |
| 2 | 3 | 959 | 1 | 5 | 920 |
| 3 | 1 | 757 | 2 | 11 | 827 |
| 3 | 2 | 823 | 5 | 11 | 827 |
| 3 | 3 | 901 | 4 | 11 | 827 |
| 4 | 1 | 844 | 6 | 14 | 752 |
| 4 | 2 | 788 | 2 | 14 | 752 |
| 4 | 3 | 624 | 6 | 14 | 752 |
| 5 | 1 | 659 | 1 | 10 | 704 |
| 5 | 2 | 764 | 7 | 10 | 704 |
| 5 | 3 | 690 | 2 | 10 | 704 |
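If you are wondering how you could get from the untidy table to the tidy one, here is one possible sketch (the tibble below is just the first two subjects typed in by hand, and it uses `separate()` and `pivot_longer()`, which are covered later in this chapter):
```
# a sketch: the first two subjects of the untidy table, typed in by hand
untidy <- tibble(
  id = 1:2,
  score_1 = c(4, 3), score_2 = c(3, 1), score_3 = c(7, 1),
  rt_1 = c(857, 902), rt_2 = c(890, 900), rt_3 = c(859, 959),
  total_meanRT = c("14 (869)", "5 (920)")
)

untidy %>%
  # split the two values packed into total_meanRT into their own columns
  separate(total_meanRT, into = c("total", "mean_rt"),
           sep = "[ ()]+", extra = "drop", convert = TRUE) %>%
  # one row per id/trial; ".value" sends the score_* and rt_* values
  # into their own score and rt columns
  pivot_longer(cols = score_1:rt_3,
               names_to = c(".value", "trial"),
               names_sep = "_")
```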
### 4\.4\.2 Wide versus long
Data tables can be in [wide](https://psyteachr.github.io/glossary/w#wide "Data where all of the observations about one subject are in the same row") format or [long](https://psyteachr.github.io/glossary/l#long "Data where each observation is on a separate row") format (and sometimes a mix of the two). Wide data are where all of the observations about one subject are in the same row, while long data are where each observation is on a separate row. You often need to convert between these formats to do different types of analyses or data processing.
Imagine a study where each subject completes a questionnaire with three items. Each answer is an [observation](https://psyteachr.github.io/glossary/o#observation "All of the data about a single trial or question.") of that subject. You are probably most familiar with data like this in a wide format, where the subject `id` is in one column, and each of the three item responses is in its own column.
Table 4\.2: Wide data
| id | Q1 | Q2 | Q3 |
| --- | --- | --- | --- |
| A | 1 | 2 | 3 |
| B | 4 | 5 | 6 |
The same data can be represented in a long format by creating a new column that specifies what `item` the observation is from and a new column that specifies the `value` of that observation.
Table 4\.3: Long data
| id | item | value |
| --- | --- | --- |
| A | Q1 | 1 |
| B | Q1 | 4 |
| A | Q2 | 2 |
| B | Q2 | 5 |
| A | Q3 | 3 |
| B | Q3 | 6 |
Create a long version of the following table.
| id | fav\_colour | fav\_animal |
| --- | --- | --- |
| Lisa | red | echidna |
| Robbie | orange | babirusa |
| Steven | green | frog |
Answer
Your answer doesn’t need to have the same column headers or be in the same order.
| id | fav | answer |
| --- | --- | --- |
| Lisa | colour | red |
| Lisa | animal | echidna |
| Robbie | colour | orange |
| Robbie | animal | babirusa |
| Steven | colour | green |
| Steven | animal | frog |
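One way to produce that answer in code is sketched below (assuming the wide table is stored in a tibble called `favourites`; `pivot_longer()` is introduced in the next section):
```
# the wide table above, typed in by hand
favourites <- tibble(
  id         = c("Lisa", "Robbie", "Steven"),
  fav_colour = c("red", "orange", "green"),
  fav_animal = c("echidna", "babirusa", "frog")
)

favourites %>%
  pivot_longer(cols = fav_colour:fav_animal,
               names_to = "fav",
               names_prefix = "fav_", # strip the "fav_" prefix from the headers
               values_to = "answer")
```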
4\.5 Pivot Functions
--------------------
The pivot functions allow you to transform a data table from wide to long or long to wide in one step.
### 4\.5\.1 Load Data
We will use the dataset `personality` from the dataskills package (or download the data from [personality.csv](https://psyteachr.github.io/msc-data-skills/data/personality.csv)). These data are from a 5\-factor personality questionnaire. Each question is labelled with the domain (Op \= openness, Co \= conscientiousness, Ex \= extroversion, Ag \= agreeableness, and Ne \= neuroticism) and the question number.
```
data("personality", package = "dataskills")
```
| user\_id | date | Op1 | Ne1 | Ne2 | Op2 | Ex1 | Ex2 | Co1 | Co2 | Ne3 | Ag1 | Ag2 | Ne4 | Ex3 | Co3 | Op3 | Ex4 | Op4 | Ex5 | Ag3 | Co4 | Co5 | Ne5 | Op5 | Ag4 | Op6 | Co6 | Ex6 | Ne6 | Co7 | Ag5 | Co8 | Ex7 | Ne7 | Co9 | Op7 | Ne8 | Ag6 | Ag7 | Co10 | Ex8 | Ex9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 2006\-03\-23 | 3 | 4 | 0 | 6 | 3 | 3 | 3 | 3 | 0 | 2 | 1 | 3 | 3 | 2 | 2 | 1 | 3 | 3 | 1 | 3 | 0 | 3 | 6 | 1 | 0 | 6 | 3 | 1 | 3 | 3 | 3 | 3 | NA | 3 | 0 | 2 | NA | 3 | 1 | 2 | 4 |
| 1 | 2006\-02\-08 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 | 6 | 6 | 6 | 0 | 6 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 |
| 2 | 2005\-10\-24 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 5 | 1 | 5 | 1 | 1 | 1 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 5 | 5 | 5 | 1 | 5 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 |
| 5 | 2005\-12\-07 | 6 | 4 | 4 | 4 | 2 | 3 | 3 | 3 | 1 | 4 | 0 | 2 | 5 | 3 | 5 | 3 | 6 | 6 | 1 | 5 | 5 | 4 | 2 | 4 | 1 | 4 | 3 | 1 | 1 | 0 | 1 | 4 | 2 | 4 | 5 | 1 | 2 | 1 | 5 | 4 | 5 |
| 8 | 2006\-07\-27 | 6 | 1 | 2 | 6 | 2 | 3 | 5 | 4 | 0 | 6 | 5 | 3 | 3 | 4 | 5 | 3 | 6 | 3 | 0 | 5 | 5 | 1 | 5 | 6 | 6 | 6 | 0 | 0 | 3 | 2 | 3 | 1 | 0 | 3 | 5 | 1 | 3 | 1 | 3 | 3 | 5 |
| 108 | 2006\-02\-28 | 3 | 2 | 1 | 4 | 4 | 4 | 4 | 3 | 1 | 5 | 4 | 2 | 3 | 4 | 4 | 3 | 3 | 3 | 4 | 3 | 3 | 1 | 4 | 5 | 4 | 5 | 4 | 1 | 4 | 5 | 4 | 2 | 2 | 4 | 4 | 1 | 4 | 3 | 5 | 4 | 2 |
### 4\.5\.2 pivot\_longer()
`pivot_longer()` converts a wide data table to long format by converting the headers from specified columns into the values of new columns, and combining the values of those columns into a new condensed column.
* `cols` refers to the columns you want to make long. You can refer to them by their names, like `col1, col2, col3, col4` or `col1:col4`, or by their numbers, like `8, 9, 10` or `8:10`.
* `names_to` is what you want to call the new columns that the gathered column headers will go into; it’s “domain” and “qnumber” in this example.
* `names_sep` is an optional argument if you have more than one value for `names_to`. It specifies the characters or position to split the values of the `cols` headers.
* `values_to` is what you want to call the new column that holds the values from the `cols` columns; it’s “score” in this example.
```
personality_long <- pivot_longer(
data = personality,
cols = Op1:Ex9, # columns to make long
names_to = c("domain", "qnumber"), # new column names for headers
names_sep = 2, # how to split the headers
values_to = "score" # new column name for values
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
## $ date <date> 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2…
## $ domain <chr> "Op", "Ne", "Ne", "Op", "Ex", "Ex", "Co", "Co", "Ne", "Ag", "A…
## $ qnumber <chr> "1", "1", "2", "2", "1", "2", "1", "2", "3", "1", "2", "4", "3…
## $ score <dbl> 3, 4, 0, 6, 3, 3, 3, 3, 0, 2, 1, 3, 3, 2, 2, 1, 3, 3, 1, 3, 0,…
```
You can pipe a data table to `glimpse()` at the end to have a quick look at it. It will still save to the object.
What would you set `names_sep` to in order to split the `cols` headers listed below into the results?
| `cols` | `names_to` | `names_sep` |
| --- | --- | --- |
| `A_1`, `A_2`, `B_1`, `B_2` | `c("condition", "version")` | `"_"` |
| `A1`, `A2`, `B1`, `B2` | `c("condition", "version")` | `1` |
| `cat-day&pre`, `cat-day&post`, `cat-night&pre`, `cat-night&post`, `dog-day&pre`, `dog-day&post`, `dog-night&pre`, `dog-night&post` | `c("pet", "time", "condition")` | `"-\|&"` |
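If you want to check those answers, here is a quick sketch with tiny made\-up tables (the names `d1`, `d2` and `d3` are just for illustration):
```
# tiny made-up tables to check the names_sep answers above
d1 <- tibble(id = 1, A_1 = 1, A_2 = 2, B_1 = 3, B_2 = 4)
pivot_longer(d1, -id, names_to = c("condition", "version"), names_sep = "_")

d2 <- tibble(id = 1, A1 = 1, A2 = 2, B1 = 3, B2 = 4)
pivot_longer(d2, -id, names_to = c("condition", "version"), names_sep = 1)

d3 <- tibble(id = 1, `cat-day&pre` = 1, `cat-day&post` = 2,
             `dog-night&pre` = 3, `dog-night&post` = 4)
pivot_longer(d3, -id, names_to = c("pet", "time", "condition"), names_sep = "-|&")
```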
### 4\.5\.3 pivot\_wider()
We can also go from long to wide format using the `pivot_wider()` function.
* `names_from` is the columns that contain your new column headers.
* `values_from` is the column that contains the values for the new columns.
* `names_sep` is the character string used to join names if `names_from` is more than one column.
```
personality_wide <- pivot_wider(
data = personality_long,
names_from = c(domain, qnumber),
values_from = score,
names_sep = ""
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Op1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Ne1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Op2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Ex1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Co1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Ne3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ag1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ne4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ex3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Co3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Op3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Ex4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Op4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Ex5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ag3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Co4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Ne5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Op5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Ag4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Op6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Co6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Ex6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ne6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Co7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Ag5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Co8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Ex7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ne7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Co9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Op7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
## $ Ne8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Ag6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Ex8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
```
4\.6 Tidy Verbs
---------------
The pivot functions above are relatively new functions that combine the four basic tidy verbs. You can also convert data between long and wide formats using these functions. Many researchers still use these functions and older code will not use the pivot functions, so it is useful to know how to interpret these.
### 4\.6\.1 gather()
Much like `pivot_longer()`, `gather()` makes a wide data table long by creating a column for the headers and a column for the values. The main difference is that you cannot turn the headers into more than one column.
* `key` is what you want to call the new column that the gathered column headers will go into; it’s “question” in this example. It is like `names_to` in `pivot_longer()`, but can only take one value (if you need more than one, you have to split the column afterwards with `separate()`).
* `value` is what you want to call the values in the gathered columns; they’re “score” in this example. It is like `values_to` in `pivot_longer()`.
* `...` refers to the columns you want to gather. It is like `cols` in `pivot_longer()`.
The `gather()` function converts `personality` from a wide data table to long format, with a row for each user/question observation. The resulting data table should have the columns: `user_id`, `date`, `question`, and `score`.
```
personality_gathered <- gather(
data = personality,
key = "question", # new column name for gathered headers
value = "score", # new column name for gathered values
Op1:Ex9 # columns to gather
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ question <chr> "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.2 separate()
* `col` is the column you want to separate
* `into` is a vector of new column names
* `sep` is the character(s) that separate your new columns. This defaults to anything that isn’t alphanumeric (e.g., `.` `,` `_` `-` `/` `:`) and is like the `names_sep` argument in `pivot_longer()`.
Split the `question` column into two columns: `domain` and `qnumber`.
There is no character to split on here, but you can separate a column after a specific number of characters by setting `sep` to an integer. For example, to split “abcde” after the third character, use `sep = 3`, which results in `c("abc", "de")`. You can also use a negative number to split before the *n*th character from the right. For example, to split a column that has words of various lengths and 2\-digit suffixes (like “lisa03” or “amanda38”), you can use `sep = -2`.
```
personality_sep <- separate(
data = personality_gathered,
col = question, # column to separate
into = c("domain", "qnumber"), # new column names
sep = 2 # where to separate
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ domain <chr> "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "O…
## $ qnumber <chr> "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
```
If you want to separate just at full stops, you need to use `sep = "\\."`, not `sep = "."`. The two slashes **escape** the full stop, making it interpreted as a literal full stop and not the regular expression for any character.
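Here is a small sketch of both points, using a made\-up table of subject codes and file names:
```
# a made-up table to illustrate sep = -2 and the escaped full stop
codes <- tibble(
  subject = c("lisa03", "amanda38"),
  file    = c("run1.csv", "run2.csv")
)

codes %>%
  # split before the 2-digit suffix by counting from the right
  separate(subject, into = c("name", "number"), sep = -2) %>%
  # split at the literal full stop (escaped so it isn't "any character")
  separate(file, into = c("run", "extension"), sep = "\\.")
```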
### 4\.6\.3 unite()
* `col` is your new united column
* `...` refers to the columns you want to unite
* `sep` is the character(s) that will separate your united columns
Put the domain and qnumber columns back together into a new column named `domain_n`. Make it in a format like “Op\_Q1\.”
```
personality_unite <- unite(
data = personality_sep,
col = "domain_n", # new column name
domain, qnumber, # columns to unite
sep = "_Q" # separation characters
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ domain_n <chr> "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.4 spread()
You can reverse the processes above, as well. For example, you can convert data from long format into wide format.
* `key` is the column that contains your new column headers. It is like `names_from` in `pivot_wider()`, but can only take one value (multiple values need to be merged first using `unite()`).
* `value` is the column that contains the values in the new spread columns. It is like `values_from` in `pivot_wider()`.
```
personality_spread <- spread(
data = personality_unite,
key = domain_n, # column that contains new headers
value = score # column that contains new values
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Ag_Q1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag_Q2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ag_Q3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Ag_Q4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Ag_Q5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Ag_Q6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag_Q7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co_Q1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co_Q10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Co_Q2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Co_Q3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Co_Q4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co_Q5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Co_Q6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Co_Q7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Co_Q8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Co_Q9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Ex_Q1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex_Q2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Ex_Q3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Ex_Q4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Ex_Q5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ex_Q6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ex_Q7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ex_Q8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex_Q9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
## $ Ne_Q1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne_Q2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Ne_Q3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ne_Q4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ne_Q5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Ne_Q6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Ne_Q7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Ne_Q8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Op_Q1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Op_Q2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Op_Q3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Op_Q4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Op_Q5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Op_Q6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Op_Q7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
```
4\.7 Pipes
----------
Pipes are a way to order your code in a more readable format.
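At its simplest, the pipe takes whatever is on its left and passes it as the first argument of the function on its right. As a quick illustration (using the built\-in `iris` data), these two lines do exactly the same thing:
```
# these two lines are equivalent:
round(mean(iris$Petal.Width), 2)

iris$Petal.Width %>% mean() %>% round(2)
```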
Let’s say you have a small data table with 10 participant IDs, two columns with variable type A, and two columns with variable type B. You want to calculate the mean of the A variables and the mean of the B variables and return a table with 10 rows (1 for each participant) and 3 columns (`id`, `A_mean` and `B_mean`).
One way you could do this is by creating a new object at every step and using that object in the next step. This is pretty clear, but you’ve created 6 unnecessary data objects in your environment. This can get confusing in very long scripts.
```
# make a data table with 10 subjects
data_original <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10, 3)
)
# gather columns A1 to B2 into "variable" and "value" columns
data_gathered <- gather(data_original, variable, value, A1:B2)
# separate the variable column at the _ into "var" and "var_n" columns
data_separated <- separate(data_gathered, variable, c("var", "var_n"), sep = 1)
# group the data by id and var
data_grouped <- group_by(data_separated, id, var)
# calculate the mean value for each id/var
data_summarised <- summarise(data_grouped, mean = mean(value), .groups = "drop")
# spread the mean column into A and B columns
data_spread <- spread(data_summarised, var, mean)
# rename A and B to A_mean and B_mean
data <- rename(data_spread, A_mean = A, B_mean = B)
data
```
| id | A\_mean | B\_mean |
| --- | --- | --- |
| 1 | \-0\.5938256 | 1\.0243046 |
| 2 | 0\.7440623 | 2\.7172046 |
| 3 | 0\.9309275 | 3\.9262358 |
| 4 | 0\.7197686 | 1\.9662632 |
| 5 | \-0\.0280832 | 1\.9473456 |
| 6 | \-0\.0982555 | 3\.2073687 |
| 7 | 0\.1256922 | 0\.9256321 |
| 8 | 1\.4526447 | 2\.3778116 |
| 9 | 0\.2976443 | 1\.6617481 |
| 10 | 0\.5589199 | 2\.1034679 |
You *can* name each object `data` and keep replacing the old data object with the new one at each step. This will keep your environment clean, but I don’t recommend it because it makes it too easy to accidentally run your code out of order when you are running line\-by\-line for development or debugging.
One way to avoid extra objects is to nest your functions, literally replacing each data object with the code that generated it in the previous step. This can be fine for very short chains.
```
mean_petal_width <- round(mean(iris$Petal.Width), 2)
```
But it gets extremely confusing for long chains:
```
# do not ever do this!!
data <- rename(
spread(
summarise(
group_by(
separate(
gather(
tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)),
variable, value, A1:B2),
variable, c("var", "var_n"), sep = 1),
id, var),
mean = mean(value), .groups = "drop"),
var, mean),
A_mean = A, B_mean = B)
```
The pipe lets you “pipe” the result of each function into the next function, allowing you to put your code in a logical order without creating too many extra objects.
```
# calculate mean of A and B variables for each participant
data <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)
) %>%
gather(variable, value, A1:B2) %>%
separate(variable, c("var", "var_n"), sep=1) %>%
group_by(id, var) %>%
summarise(mean = mean(value), .groups = "drop") %>%
spread(var, mean) %>%
rename(A_mean = A, B_mean = B)
```
You can read this code from top to bottom as follows:
1. Make a tibble called `data` with
* id of 1 to 10,
* A1 of 10 random numbers from a normal distribution,
* A2 of 10 random numbers from a normal distribution,
* B1 of 10 random numbers from a normal distribution,
* B2 of 10 random numbers from a normal distribution; and then
2. Gather to create `variable` and `value` columns from columns `A1` to `B2`; and then
3. Separate the column `variable` into two new columns called `var` and `var_n`, splitting at character 1; and then
4. Group by columns `id` and `var`; and then
5. Summarise a new column called `mean` as the mean of the `value` column for each group and drop the grouping; and then
6. Spread to make new columns with the key names in `var` and values in `mean`; and then
7. Rename to make columns called `A_mean` (old `A`) and `B_mean` (old `B`)
You can make intermediate objects whenever you need to break up your code because it’s getting too complicated or you need to debug something.
You can debug a pipe by highlighting from the beginning to just before the pipe you want to stop at. Try this by highlighting from `data <-` to the end of the `separate` function and typing cmd\-return. What does `data` look like now?
Chain all the steps above using pipes.
```
personality_reshaped <- personality %>%
gather("question", "score", Op1:Ex9) %>%
separate(question, c("domain", "qnumber"), sep = 2) %>%
unite("domain_n", domain, qnumber, sep = "_Q") %>%
spread(domain_n, score)
```
4\.8 More Complex Example
-------------------------
### 4\.8\.1 Load Data
Get data on infant and maternal mortality rates from the dataskills package. If you don’t have the package, you can download them here:
* [infant mortality](https://psyteachr.github.io/msc-data-skills/data/infmort.csv)
* [maternal mortality](https://psyteachr.github.io/msc-data-skills/data/matmort.xls)
```
data("infmort", package = "dataskills")
head(infmort)
```
| Country | Year | Infant mortality rate (probability of dying between birth and age 1 per 1000 live births) |
| --- | --- | --- |
| Afghanistan | 2015 | 66\.3 \[52\.7\-83\.9] |
| Afghanistan | 2014 | 68\.1 \[55\.7\-83\.6] |
| Afghanistan | 2013 | 69\.9 \[58\.7\-83\.5] |
| Afghanistan | 2012 | 71\.7 \[61\.6\-83\.7] |
| Afghanistan | 2011 | 73\.4 \[64\.4\-84\.2] |
| Afghanistan | 2010 | 75\.1 \[66\.9\-85\.1] |
```
data("matmort", package = "dataskills")
head(matmort)
```
| Country | 1990 | 2000 | 2015 |
| --- | --- | --- | --- |
| Afghanistan | 1 340 \[ 878 \- 1 950] | 1 100 \[ 745 \- 1 570] | 396 \[ 253 \- 620] |
| Albania | 71 \[ 58 \- 88] | 43 \[ 33 \- 56] | 29 \[ 16 \- 46] |
| Algeria | 216 \[ 141 \- 327] | 170 \[ 118 \- 241] | 140 \[ 82 \- 244] |
| Angola | 1 160 \[ 627 \- 2 020] | 924 \[ 472 \- 1 730] | 477 \[ 221 \- 988] |
| Argentina | 72 \[ 64 \- 80] | 60 \[ 54 \- 65] | 52 \[ 44 \- 63] |
| Armenia | 58 \[ 51 \- 65] | 40 \[ 35 \- 46] | 25 \[ 21 \- 31] |
### 4\.8\.2 Wide to Long
`matmort` is in wide format, with a separate column for each year. Change it to long format, with a row for each Country/Year observation.
This example is complicated because the column names to gather *are* numbers. If the column names are non\-standard (e.g., have spaces, start with numbers, or have special characters), you can enclose them in backticks (\`) like the example below.
```
matmort_long <- matmort %>%
pivot_longer(cols = `1990`:`2015`,
names_to = "Year",
values_to = "stats") %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Albania", "Alban…
## $ Year <chr> "1990", "2000", "2015", "1990", "2000", "2015", "1990", "2000"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "1 100 [ 745 - 1 570]", "396 [ 253 - …
```
You can put `matmort` as the first argument to `pivot_longer()`; you don’t have to pipe it in. But when I’m working on data processing I often find myself needing to insert or rearrange steps, and I constantly introduce errors by forgetting to take the first argument out of a pipe chain, so now I start with the original data table and pipe from there.
Alternatively, you can use the `gather()` function.
```
matmort_long <- matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "71 [ 58 - 88]", "216 [ 141 - 327]",…
```
### 4\.8\.3 One Piece of Data per Column
The data in the `stats` column is in an unusual format with some sort of confidence interval in brackets and lots of extra spaces. We don’t need any of the spaces, so first we’ll remove them with `mutate()`, which we’ll learn more about in the next lesson.
The `separate` function will separate your data on anything that is not a number or letter, so try it first without specifying the `sep` argument. The `into` argument is a list of the new column names.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi")) %>%
glimpse()
```
```
## Warning: Expected 3 pieces. Additional pieces discarded in 543 rows [1, 2, 3, 4,
## 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...].
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
The `gsub(pattern, replacement, x)` function is a flexible way to do search and replace. The example above replaces all occurrences of the `pattern` `" "` (a space) with the `replacement` `""` (nothing) in the string `x` (the `stats` column). Use `sub()` instead if you only want to replace the first occurrence of a pattern. We only used a simple pattern here, but you can use more complicated [regex](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) patterns to replace, for example, all even digits (e.g., `gsub("[02468]", "", "id = 123456")`) or all occurrences of the word colour in US or UK spelling (e.g., `gsub("colo(u)?r", "**", "replace color, colour, or colours, but not collors")`).
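If you want to try those out, here they are as a small runnable sketch:
```
# search and replace with gsub() / sub()
gsub(" ", "", "1 340 [ 878 - 1 950]") # all spaces removed
sub(" ", "", "1 340 [ 878 - 1 950]")  # only the first space removed
gsub("[02468]", "", "id = 123456")    # all even digits removed
gsub("colo(u)?r", "**", "replace color, colour, or colours, but not collors")
```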
#### 4\.8\.3\.1 Handle spare columns with `extra`
The previous example should have given you a warning about “Additional pieces discarded in 543 rows.” This is because `separate` splits the column at the brackets and dashes, so the text `100[90-110]` would split into four values, `c("100", "90", "110", "")`, but we only specified 3 new columns. The fourth value is always empty (just the part after the last bracket), so we are happy to drop it, but `separate` generates a warning so you don’t do that accidentally. You can turn off the warning by adding the `extra` argument and setting it to `"drop"`. Look at the help for `??tidyr::separate` to see what the other options do.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
#### 4\.8\.3\.2 Set delimiters with `sep`
Now do the same with `infmort`. It’s already in long format, so you don’t need to use `gather`, but the third column has a ridiculously long name, so we can just refer to it by its column number (3\).
```
infmort_split <- infmort %>%
separate(3, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66", "68", "69", "71", "73", "75", "76", "78", "80", "82", "8…
## $ ci_low <chr> "3", "1", "9", "7", "4", "1", "8", "6", "4", "3", "4", "7", "0…
## $ ci_hi <chr> "52", "55", "58", "61", "64", "66", "69", "71", "73", "75", "7…
```
**Wait, that didn’t work at all!** It split the column on spaces, brackets, *and* full stops. We just want to split on the spaces, brackets and dashes. So we need to manually set `sep` to what the delimiters are. Also, once there are more than a few arguments specified for a function, it’s easier to read them if you put one argument on each line.
You can use [regular expressions](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) to separate complex columns. Here, we want to separate on dashes and brackets. You can separate on a list of delimiters by putting them in parentheses, separated by `|`. It’s a little more complicated because brackets have a special meaning in regex, so you need to “escape” the left one with two backslashes, as in `\\[`.
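To check what this pattern matches before handing it to `separate()`, you can try it on a single value with base R’s `strsplit()` (a quick side check, not part of the original lesson):
```
# split one value wherever "[", "-", or "]" occurs
strsplit("66.3 [52.7-83.9]", "(\\[|-|])")
# [[1]] "66.3 " "52.7" "83.9"   (note the trailing space on the rate)
```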
```
infmort_split <- infmort %>%
separate(
col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])"
) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66.3 ", "68.1 ", "69.9 ", "71.7 ", "73.4 ", "75.1 ", "76.8 ",…
## $ ci_low <chr> "52.7", "55.7", "58.7", "61.6", "64.4", "66.9", "69.0", "71.2"…
## $ ci_hi <chr> "83.9", "83.6", "83.5", "83.7", "84.2", "85.1", "86.1", "87.3"…
```
#### 4\.8\.3\.3 Fix data types with `convert`
That’s better. Notice the `<chr>` next to `rate`, `ci_low` and `ci_hi`. That means these columns hold characters (text), not numbers or integers. This can cause problems when you try to do things like average the numbers (you can’t average words), so we can fix it by adding the argument `convert` and setting it to `TRUE`.
```
infmort_split <- infmort %>%
separate(col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <dbl> 66.3, 68.1, 69.9, 71.7, 73.4, 75.1, 76.8, 78.6, 80.4, 82.3, 84…
## $ ci_low <dbl> 52.7, 55.7, 58.7, 61.6, 64.4, 66.9, 69.0, 71.2, 73.4, 75.5, 77…
## $ ci_hi <dbl> 83.9, 83.6, 83.5, 83.7, 84.2, 85.1, 86.1, 87.3, 88.9, 90.7, 92…
```
Do the same for `matmort`.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.4 All in one step
We can chain all the steps for `matmort` above together, since we don’t need those intermediate data tables.
```
matmort2 <- dataskills::matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(
col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE
) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.5 Columns by Year
Spread out the maternal mortality rate by year.
```
matmort_wide <- matmort2 %>%
spread(key = Year, value = rate) %>%
print()
```
```
## # A tibble: 542 x 6
## Country ci_low ci_hi `1990` `2000` `2015`
## <chr> <int> <int> <int> <int> <int>
## 1 Afghanistan 253 620 NA NA 396
## 2 Afghanistan 745 1570 NA 1100 NA
## 3 Afghanistan 878 1950 1340 NA NA
## 4 Albania 16 46 NA NA 29
## 5 Albania 33 56 NA 43 NA
## 6 Albania 58 88 71 NA NA
## 7 Algeria 82 244 NA NA 140
## 8 Algeria 118 241 NA 170 NA
## 9 Algeria 141 327 216 NA NA
## 10 Angola 221 988 NA NA 477
## # … with 532 more rows
```
Nope, that didn’t work at all, but it’s a really common mistake when spreading data. This is because `spread` matches on all the remaining columns, so Afghanistan with `ci_low` of 253 is treated as a different observation than Afghanistan with `ci_low` of 745\.
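If you wanted to stay with the older verbs, one workaround (a sketch that is not in the original lesson) is to gather the three value columns into a single key/value pair and unite the key with `Year` first, so `spread` has only one value column to place and each country ends up on one row:
```
matmort_wide_v <- matmort2 %>%
  gather("measure", "value", rate:ci_hi) %>% # one value column
  unite("Year_measure", Year, measure) %>%   # e.g. "1990_rate"
  spread(Year_measure, value)                # one row per country
```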
This is where `pivot_wider()` can be very useful. You can set `values_from` to multiple column names and their names will be added to the `names_from` values.
```
matmort_wide <- matmort2 %>%
pivot_wider(
names_from = Year,
values_from = c(rate, ci_low, ci_hi)
)
glimpse(matmort_wide)
```
```
## Rows: 181
## Columns: 10
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina"…
## $ rate_1990 <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33…
## $ rate_2000 <int> 1100, 43, 170, 924, 60, 40, 9, 5, 48, 61, 21, 399, 48, 26,…
## $ rate_2015 <int> 396, 29, 140, 477, 52, 25, 6, 4, 25, 80, 15, 176, 27, 4, 7…
## $ ci_low_1990 <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, …
## $ ci_low_2000 <int> 745, 33, 118, 472, 54, 35, 8, 4, 42, 50, 18, 322, 38, 22, …
## $ ci_low_2015 <int> 253, 16, 82, 221, 44, 21, 5, 3, 17, 53, 12, 125, 19, 3, 5,…
## $ ci_hi_1990 <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 3…
## $ ci_hi_2000 <int> 1570, 56, 241, 1730, 65, 46, 10, 6, 55, 74, 26, 496, 58, 3…
## $ ci_hi_2015 <int> 620, 46, 244, 988, 63, 31, 7, 5, 35, 124, 19, 280, 37, 6, …
```
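If you would rather have the year first in the new column names (e.g., `1990_rate` instead of `rate_1990`), `pivot_wider()` also takes a `names_glue` argument; a sketch of how that might look:
```
matmort_wide2 <- matmort2 %>%
  pivot_wider(
    names_from = Year,
    values_from = c(rate, ci_low, ci_hi),
    names_glue = "{Year}_{.value}" # e.g. "1990_rate", "1990_ci_low"
  )
```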
### 4\.8\.6 Experimentum Data
Students in the Institute of Neuroscience and Psychology at the University of Glasgow can use the online experiment builder platform, [Experimentum](https://debruine.github.io/experimentum/). The platform is also [open source on github](https://github.com/debruine/experimentum) for anyone who can install it on a web server. It allows you to group questionnaires and experiments into **projects** with randomisation and counterbalancing. Data for questionnaires and experiments are downloadable in long format, but researchers often need to put them in wide format for analysis.
Look at the help menu for built\-in dataset `dataskills::experimentum_quests` to learn what each column is. Subjects are asked questions about dogs to test the different questionnaire response types.
* current: Do you own a dog? (yes/no)
* past: Have you ever owned a dog? (yes/no)
* name: What is the best name for a dog? (free short text)
* good: How good are dogs? (1\=pretty good:7\=very good)
* country: What country do borzois come from?
* good\_borzoi: How good are borzois? (0\=pretty good:100\=very good)
* text: Write some text about dogs. (free long text)
* time: What time is it? (time)
To get the dataset into wide format, where each question is in a separate column, use the following code:
```
q <- dataskills::experimentum_quests %>%
pivot_wider(id_cols = session_id:user_age,
names_from = q_name,
values_from = dv) %>%
type.convert(as.is = TRUE) %>%
print()
```
```
## # A tibble: 24 x 15
## session_id project_id quest_id user_id user_sex user_status user_age current
## <int> <int> <int> <int> <chr> <chr> <dbl> <int>
## 1 34034 1 1 31105 female guest 28.2 1
## 2 34104 1 1 31164 male registered 19.4 1
## 3 34326 1 1 31392 female guest 17 0
## 4 34343 1 1 31397 male guest 22 1
## 5 34765 1 1 31770 female guest 44 1
## 6 34796 1 1 31796 female guest 35.9 0
## 7 34806 1 1 31798 female guest 35 0
## 8 34822 1 1 31802 female guest 58 1
## 9 34864 1 1 31820 male guest 20 0
## 10 35014 1 1 31921 female student 39.2 1
## # … with 14 more rows, and 7 more variables: past <int>, name <chr>,
## # good <int>, country <chr>, text <chr>, good_borzoi <int>, time <chr>
```
The responses in the `dv` column have multiple types (e.g., integer, double, and character), but they are all represented as character strings when they’re in the same column. After you spread the data to wide format, each column should be given the correct data type. The function `type.convert()` makes a best guess at what type each new column should be and converts it; columns where none of the numbers have decimal places become integers. The argument `as.is = TRUE` keeps text columns as characters instead of converting them to factors.
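As a minimal illustration of what `type.convert()` does (a made\-up tibble, not the Experimentum data):
```
demo <- tibble(
  n = c("1", "2"),     # digits only
  x = c("1.5", "2.5"), # has decimal places
  w = c("a", "b")      # not numeric at all
)
type.convert(demo, as.is = TRUE) %>%
  glimpse() # n becomes <int>, x becomes <dbl>, w stays <chr>
```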
4\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [long](https://psyteachr.github.io/glossary/l#long) | Data where each observation is on a separate row |
| [observation](https://psyteachr.github.io/glossary/o#observation) | All of the data about a single trial or question. |
| [value](https://psyteachr.github.io/glossary/v#value) | A single number or piece of data. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [wide](https://psyteachr.github.io/glossary/w#wide) | Data where all of the observations about one subject are in the same row |
4\.10 Exercises
---------------
Download the [exercises](exercises/04_tidyr_exercise.Rmd). See the [answers](exercises/04_tidyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(4)
# run this to access the answers
dataskills::exercise(4, answers = TRUE)
```
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/tidyr.html |
Chapter 4 Tidy Data
===================
4\.1 Learning Objectives
------------------------
### 4\.1\.1 Basic
1. Understand the concept of [tidy data](tidyr.html#tidy-data) [(video)](https://youtu.be/EsSN4OdsNpc)
2. Be able to convert between long and wide formats using pivot functions [(video)](https://youtu.be/4dvLmjhwN8I)
* [`pivot_longer()`](tidyr.html#pivot_longer)
* [`pivot_wider()`](tidyr.html#pivot_wider)
3. Be able to use the 4 basic `tidyr` verbs [(video)](https://youtu.be/oUWjb0JC8zM)
* [`gather()`](tidyr.html#gather)
* [`separate()`](tidyr.html#separate)
* [`spread()`](tidyr.html#spread)
* [`unite()`](tidyr.html#unite)
4. Be able to chain functions using [pipes](tidyr.html#pipes) [(video)](https://youtu.be/itfrlLaN4SE)
### 4\.1\.2 Advanced
5. Be able to use [regular expressions](#regex) to separate complex columns
4\.2 Resources
--------------
* [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
* [Chapter 12: Tidy Data](http://r4ds.had.co.nz/tidy-data.html) in *R for Data Science*
* [Chapter 18: Pipes](http://r4ds.had.co.nz/pipes.html) in *R for Data Science*
* [Data wrangling cheat sheet](https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf)
4\.3 Setup
----------
```
# libraries needed
library(tidyverse)
library(dataskills)
set.seed(8675309) # makes sure random numbers are reproducible
```
4\.4 Tidy Data
--------------
### 4\.4\.1 Three Rules
* Each [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.") must have its own column
* Each [observation](https://psyteachr.github.io/glossary/o#observation "All of the data about a single trial or question.") must have its own row
* Each [value](https://psyteachr.github.io/glossary/v#value "A single number or piece of data.") must have its own cell
This table has three observations per row and the `total_meanRT` column contains two values.
Table 4\.1: Untidy table
| id | score\_1 | score\_2 | score\_3 | rt\_1 | rt\_2 | rt\_3 | total\_meanRT |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 4 | 3 | 7 | 857 | 890 | 859 | 14 (869\) |
| 2 | 3 | 1 | 1 | 902 | 900 | 959 | 5 (920\) |
| 3 | 2 | 5 | 4 | 757 | 823 | 901 | 11 (827\) |
| 4 | 6 | 2 | 6 | 844 | 788 | 624 | 14 (752\) |
| 5 | 1 | 7 | 2 | 659 | 764 | 690 | 10 (704\) |
This is the tidy version.
Table 4\.1: Tidy table
| id | trial | rt | score | total | mean\_rt |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 857 | 4 | 14 | 869 |
| 1 | 2 | 890 | 3 | 14 | 869 |
| 1 | 3 | 859 | 7 | 14 | 869 |
| 2 | 1 | 902 | 3 | 5 | 920 |
| 2 | 2 | 900 | 1 | 5 | 920 |
| 2 | 3 | 959 | 1 | 5 | 920 |
| 3 | 1 | 757 | 2 | 11 | 827 |
| 3 | 2 | 823 | 5 | 11 | 827 |
| 3 | 3 | 901 | 4 | 11 | 827 |
| 4 | 1 | 844 | 6 | 14 | 752 |
| 4 | 2 | 788 | 2 | 14 | 752 |
| 4 | 3 | 624 | 6 | 14 | 752 |
| 5 | 1 | 659 | 1 | 10 | 704 |
| 5 | 2 | 764 | 7 | 10 | 704 |
| 5 | 3 | 690 | 2 | 10 | 704 |
### 4\.4\.2 Wide versus long
Data tables can be in [wide](https://psyteachr.github.io/glossary/w#wide "Data where all of the observations about one subject are in the same row") format or [long](https://psyteachr.github.io/glossary/l#long "Data where each observation is on a separate row") format (and sometimes a mix of the two). Wide data are where all of the observations about one subject are in the same row, while long data are where each observation is on a separate row. You often need to convert between these formats to do different types of analyses or data processing.
Imagine a study where each subject completes a questionnaire with three items. Each answer is an [observation](https://psyteachr.github.io/glossary/o#observation "All of the data about a single trial or question.") of that subject. You are probably most familiar with data like this in a wide format, where the subject `id` is in one column, and each of the three item responses is in its own column.
Table 4\.2: Wide data
| id | Q1 | Q2 | Q3 |
| --- | --- | --- | --- |
| A | 1 | 2 | 3 |
| B | 4 | 5 | 6 |
The same data can be represented in a long format by creating a new column that specifies what `item` the observation is from and a new column that specifies the `value` of that observation.
Table 4\.3: Long data
| id | item | value |
| --- | --- | --- |
| A | Q1 | 1 |
| B | Q1 | 4 |
| A | Q2 | 2 |
| B | Q2 | 5 |
| A | Q3 | 3 |
| B | Q3 | 6 |
Create a long version of the following table.
| id | fav\_colour | fav\_animal |
| --- | --- | --- |
| Lisa | red | echidna |
| Robbie | orange | babirusa |
| Steven | green | frog |
Answer
Your answer doesn’t need to have the same column headers or be in the same order.
| id | fav | answer |
| --- | --- | --- |
| Lisa | colour | red |
| Lisa | animal | echidna |
| Robbie | colour | orange |
| Robbie | animal | babirusa |
| Steven | colour | green |
| Steven | animal | frog |
4\.5 Pivot Functions
--------------------
The pivot functions allow you to transform a data table from wide to long or long to wide in one step.
### 4\.5\.1 Load Data
We will use the dataset `personality` from the dataskills package (or download the data from [personality.csv](https://psyteachr.github.io/msc-data-skills/data/personality.csv)). These data are from a 5\-factor personality questionnaire. Each question is labelled with the domain (Op \= openness, Co \= conscientiousness, Ex \= extroversion, Ag \= agreeableness, and Ne \= neuroticism) and the question number.
```
data("personality", package = "dataskills")
```
| user\_id | date | Op1 | Ne1 | Ne2 | Op2 | Ex1 | Ex2 | Co1 | Co2 | Ne3 | Ag1 | Ag2 | Ne4 | Ex3 | Co3 | Op3 | Ex4 | Op4 | Ex5 | Ag3 | Co4 | Co5 | Ne5 | Op5 | Ag4 | Op6 | Co6 | Ex6 | Ne6 | Co7 | Ag5 | Co8 | Ex7 | Ne7 | Co9 | Op7 | Ne8 | Ag6 | Ag7 | Co10 | Ex8 | Ex9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 2006\-03\-23 | 3 | 4 | 0 | 6 | 3 | 3 | 3 | 3 | 0 | 2 | 1 | 3 | 3 | 2 | 2 | 1 | 3 | 3 | 1 | 3 | 0 | 3 | 6 | 1 | 0 | 6 | 3 | 1 | 3 | 3 | 3 | 3 | NA | 3 | 0 | 2 | NA | 3 | 1 | 2 | 4 |
| 1 | 2006\-02\-08 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 | 6 | 6 | 6 | 0 | 6 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 |
| 2 | 2005\-10\-24 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 5 | 1 | 5 | 1 | 1 | 1 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 5 | 5 | 5 | 1 | 5 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 |
| 5 | 2005\-12\-07 | 6 | 4 | 4 | 4 | 2 | 3 | 3 | 3 | 1 | 4 | 0 | 2 | 5 | 3 | 5 | 3 | 6 | 6 | 1 | 5 | 5 | 4 | 2 | 4 | 1 | 4 | 3 | 1 | 1 | 0 | 1 | 4 | 2 | 4 | 5 | 1 | 2 | 1 | 5 | 4 | 5 |
| 8 | 2006\-07\-27 | 6 | 1 | 2 | 6 | 2 | 3 | 5 | 4 | 0 | 6 | 5 | 3 | 3 | 4 | 5 | 3 | 6 | 3 | 0 | 5 | 5 | 1 | 5 | 6 | 6 | 6 | 0 | 0 | 3 | 2 | 3 | 1 | 0 | 3 | 5 | 1 | 3 | 1 | 3 | 3 | 5 |
| 108 | 2006\-02\-28 | 3 | 2 | 1 | 4 | 4 | 4 | 4 | 3 | 1 | 5 | 4 | 2 | 3 | 4 | 4 | 3 | 3 | 3 | 4 | 3 | 3 | 1 | 4 | 5 | 4 | 5 | 4 | 1 | 4 | 5 | 4 | 2 | 2 | 4 | 4 | 1 | 4 | 3 | 5 | 4 | 2 |
### 4\.5\.2 pivot\_longer()
`pivot_longer()` converts a wide data table to long format by converting the headers from specified columns into the values of new columns, and combining the values of those columns into a new condensed column.
* `cols` refers to the columns you want to make long. You can refer to them by their names, like `col1, col2, col3, col4` or `col1:col4`, or by their numbers, like `8, 9, 10` or `8:10`.
* `names_to` is what you want to call the new columns that the gathered column headers will go into; it’s “domain” and “qnumber” in this example.
* `names_sep` is an optional argument if you have more than one value for `names_to`. It specifies the characters or position to split the values of the `cols` headers.
* `values_to` is what you want to call the values from the gathered columns; they’re “score” in this example.
```
personality_long <- pivot_longer(
data = personality,
cols = Op1:Ex9, # columns to make long
names_to = c("domain", "qnumber"), # new column names for headers
names_sep = 2, # how to split the headers
values_to = "score" # new column name for values
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
## $ date <date> 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2…
## $ domain <chr> "Op", "Ne", "Ne", "Op", "Ex", "Ex", "Co", "Co", "Ne", "Ag", "A…
## $ qnumber <chr> "1", "1", "2", "2", "1", "2", "1", "2", "3", "1", "2", "4", "3…
## $ score <dbl> 3, 4, 0, 6, 3, 3, 3, 3, 0, 2, 1, 3, 3, 2, 2, 1, 3, 3, 1, 3, 0,…
```
You can pipe a data table to `glimpse()` at the end to have a quick look at it. It will still save to the object.
What would you set `names_sep` to in order to split the `cols` headers listed below into the results?
| `cols` | `names_to` | `names_sep` |
| --- | --- | --- |
| `A_1`, `A_2`, `B_1`, `B_2` | `c("condition", "version")` | choose one of: `A`, `B`, `1`, `2`, `_` |
| `A1`, `A2`, `B1`, `B2` | `c("condition", "version")` | choose one of: `A`, `B`, `1`, `2`, `_` |
| `cat-day&pre`, `cat-day&post`, `cat-night&pre`, `cat-night&post`, `dog-day&pre`, `dog-day&post`, `dog-night&pre`, `dog-night&post` | `c("pet", "time", "condition")` | choose one of: `-`, `&` |
### 4\.5\.3 pivot\_wider()
We can also go from long to wide format using the `pivot_wider()` function.
* `names_from` is the columns that contain your new column headers.
* `values_from` is the column that contains the values for the new columns.
* `names_sep` is the character string used to join names if `names_from` is more than one column.
```
personality_wide <- pivot_wider(
data = personality_long,
names_from = c(domain, qnumber),
values_from = score,
names_sep = ""
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Op1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Ne1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Op2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Ex1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Co1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Ne3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ag1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ne4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ex3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Co3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Op3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Ex4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Op4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Ex5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ag3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Co4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Ne5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Op5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Ag4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Op6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Co6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Ex6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ne6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Co7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Ag5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Co8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Ex7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ne7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Co9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Op7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
## $ Ne8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Ag6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Ex8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
```
4\.6 Tidy Verbs
---------------
The pivot functions above are relatively new functions that combine the four basic tidy verbs. You can also convert data between long and wide formats using these functions. Many researchers still use these functions and older code will not use the pivot functions, so it is useful to know how to interpret these.
### 4\.6\.1 gather()
Much like `pivot_longer()`, `gather()` makes a wide data table long by creating a column for the headers and a column for the values. The main difference is that you cannot turn the headers into more than one column.
* `key` is what you want to call the new column that the gathered column headers will go into; it’s “question” in this example. It is like `names_to` in `pivot_longer()`, but can only take one value (multiple values need to be split apart with `separate()` afterwards).
* `value` is what you want to call the values in the gathered columns; they’re “score” in this example. It is like `values_to` in `pivot_longer()`.
* `...` refers to the columns you want to gather. It is like `cols` in `pivot_longer()`.
The `gather()` function converts `personality` from a wide data table to long format, with a row for each user/question observation. The resulting data table should have the columns: `user_id`, `date`, `question`, and `score`.
```
personality_gathered <- gather(
data = personality,
key = "question", # new column name for gathered headers
value = "score", # new column name for gathered values
Op1:Ex9 # columns to gather
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ question <chr> "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.2 separate()
* `col` is the column you want to separate
* `into` is a vector of new column names
* `sep` is the character(s) that separate your new columns. This defaults to anything that isn’t alphanumeric, like `.,_-/:`, and is like the `names_sep` argument in `pivot_longer()`.
Split the `question` column into two columns: `domain` and `qnumber`.
There is no character to split on here, but you can separate a column after a specific number of characters by setting `sep` to an integer. For example, to split “abcde” after the third character, use `sep = 3`, which results in `c("abc", "de")`. You can also use a negative number to split before the *n*th character from the right. For example, to split a column that has words of various lengths and 2\-digit suffixes (like "lisa03", "amanda38"), you can use `sep = -2`.
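For example, a quick sketch of the negative `sep` case described above (the `user` column here is made up for illustration):
```
tibble(user = c("lisa03", "amanda38")) %>%
  separate(user, into = c("name", "num"), sep = -2)
# name   num
# lisa   03
# amanda 38
```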
```
personality_sep <- separate(
data = personality_gathered,
col = question, # column to separate
into = c("domain", "qnumber"), # new column names
sep = 2 # where to separate
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ domain <chr> "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "O…
## $ qnumber <chr> "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
```
If you want to separate just at full stops, you need to use `sep = "\\."`, not `sep = "."`. The two backslashes **escape** the full stop, making it interpreted as a literal full stop and not the regular expression for any character.
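A minimal check of that escaping rule (the `version` column is made up for illustration):
```
tibble(version = c("1.2", "3.4")) %>%
  separate(version, into = c("major", "minor"), sep = "\\.")
# major minor
# 1     2
# 3     4
```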
### 4\.6\.3 unite()
* `col` is your new united column
* `...` refers to the columns you want to unite
* `sep` is the character(s) that will separate your united columns
Put the domain and qnumber columns back together into a new column named `domain_n`. Make it in a format like “Op\_Q1”.
```
personality_unite <- unite(
data = personality_sep,
col = "domain_n", # new column name
domain, qnumber, # columns to unite
sep = "_Q" # separation characters
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ domain_n <chr> "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.4 spread()
You can reverse the processes above, as well. For example, you can convert data from long format into wide format.
* `key` is the column that contains your new column headers. It is like `names_from` in `pivot_wider()`, but can only take one value (multiple values need to be merged first using `unite()`).
* `value` is the column that contains the values in the new spread columns. It is like `values_from` in `pivot_wider()`.
```
personality_spread <- spread(
data = personality_unite,
key = domain_n, # column that contains new headers
value = score # column that contains new values
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Ag_Q1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag_Q2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ag_Q3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Ag_Q4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Ag_Q5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Ag_Q6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag_Q7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co_Q1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co_Q10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Co_Q2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Co_Q3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Co_Q4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co_Q5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Co_Q6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Co_Q7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Co_Q8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Co_Q9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Ex_Q1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex_Q2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Ex_Q3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Ex_Q4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Ex_Q5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ex_Q6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ex_Q7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ex_Q8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex_Q9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
## $ Ne_Q1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne_Q2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Ne_Q3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ne_Q4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ne_Q5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Ne_Q6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Ne_Q7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Ne_Q8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Op_Q1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Op_Q2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Op_Q3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Op_Q4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Op_Q5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Op_Q6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Op_Q7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
```
4\.7 Pipes
----------
Pipes are a way to order your code in a more readable format.
Let’s say you have a small data table with 10 participant IDs, two columns with variable type A, and 2 columns with variable type B. You want to calculate the mean of the A variables and the mean of the B variables and return a table with 10 rows (1 for each participant) and 3 columns (`id`, `A_mean` and `B_mean`).
One way you could do this is by creating a new object at every step and using that object in the next step. This is pretty clear, but you’ve created 6 unnecessary data objects in your environment. This can get confusing in very long scripts.
```
# make a data table with 10 subjects
data_original <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10, 3)
)
# gather columns A1 to B2 into "variable" and "value" columns
data_gathered <- gather(data_original, variable, value, A1:B2)
# separate the variable column at the _ into "var" and "var_n" columns
data_separated <- separate(data_gathered, variable, c("var", "var_n"), sep = 1)
# group the data by id and var
data_grouped <- group_by(data_separated, id, var)
# calculate the mean value for each id/var
data_summarised <- summarise(data_grouped, mean = mean(value), .groups = "drop")
# spread the mean column into A and B columns
data_spread <- spread(data_summarised, var, mean)
# rename A and B to A_mean and B_mean
data <- rename(data_spread, A_mean = A, B_mean = B)
data
```
| id | A\_mean | B\_mean |
| --- | --- | --- |
| 1 | \-0\.5938256 | 1\.0243046 |
| 2 | 0\.7440623 | 2\.7172046 |
| 3 | 0\.9309275 | 3\.9262358 |
| 4 | 0\.7197686 | 1\.9662632 |
| 5 | \-0\.0280832 | 1\.9473456 |
| 6 | \-0\.0982555 | 3\.2073687 |
| 7 | 0\.1256922 | 0\.9256321 |
| 8 | 1\.4526447 | 2\.3778116 |
| 9 | 0\.2976443 | 1\.6617481 |
| 10 | 0\.5589199 | 2\.1034679 |
You *can* name each object `data` and keep replacing the old data object with the new one at each step. This will keep your environment clean, but I don’t recommend it because it makes it too easy to accidentally run your code out of order when you are running line\-by\-line for development or debugging.
One way to avoid extra objects is to nest your functions, literally replacing each data object with the code that generated it in the previous step. This can be fine for very short chains.
```
mean_petal_width <- round(mean(iris$Petal.Width), 2)
```
But it gets extremely confusing for long chains:
```
# do not ever do this!!
data <- rename(
spread(
summarise(
group_by(
separate(
gather(
tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)),
variable, value, A1:B2),
variable, c("var", "var_n"), sep = 1),
id, var),
mean = mean(value), .groups = "drop"),
var, mean),
A_mean = A, B_mean = B)
```
The pipe lets you “pipe” the result of each function into the next function, allowing you to put your code in a logical order without creating too many extra objects.
```
# calculate mean of A and B variables for each participant
data <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)
) %>%
gather(variable, value, A1:B2) %>%
separate(variable, c("var", "var_n"), sep=1) %>%
group_by(id, var) %>%
summarise(mean = mean(value), .groups = "drop") %>%
spread(var, mean) %>%
rename(A_mean = A, B_mean = B)
```
You can read this code from top to bottom as follows:
1. Make a tibble called `data` with
* id of 1 to 10,
* A1 of 10 random numbers from a normal distribution,
* A2 of 10 random numbers from a normal distribution,
* B1 of 10 random numbers from a normal distribution,
* B2 of 10 random numbers from a normal distribution; and then
2. Gather to create `variable` and `value` columns from columns `A1` to `B2`; and then
3. Separate the column `variable` into 2 new columns called `var` and `var_n`, separating at character 1; and then
4. Group by columns `id` and `var`; and then
5. Summarise a new column called `mean` as the mean of the `value` column for each group and drop the grouping; and then
6. Spread to make new columns with the key names in `var` and values in `mean`; and then
7. Rename to make columns called `A_mean` (old `A`) and `B_mean` (old `B`)
You can make intermediate objects whenever you need to break up your code because it’s getting too complicated or you need to debug something.
You can debug a pipe by highlighting from the beginning to just before the pipe you want to stop at. Try this by highlighting from `data <-` to the end of the `separate` function and typing cmd\-return. What does `data` look like now?
Chain all the steps above using pipes.
```
personality_reshaped <- personality %>%
gather("question", "score", Op1:Ex9) %>%
separate(question, c("domain", "qnumber"), sep = 2) %>%
unite("domain_n", domain, qnumber, sep = "_Q") %>%
spread(domain_n, score)
```
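The same reshaping can be written with the pivot functions from earlier in the chapter (a sketch using the same column conventions):
```
personality_reshaped2 <- personality %>%
  pivot_longer(Op1:Ex9,
               names_to = c("domain", "qnumber"),
               names_sep = 2,
               values_to = "score") %>%
  unite("domain_n", domain, qnumber, sep = "_Q") %>%
  pivot_wider(names_from = domain_n, values_from = score)
```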
4\.8 More Complex Example
-------------------------
### 4\.8\.1 Load Data
Get data on infant and maternal mortality rates from the dataskills package. If you don’t have the package, you can download them here:
* [infant mortality](https://psyteachr.github.io/msc-data-skills/data/infmort.csv)
* [maternal mortality](https://psyteachr.github.io/msc-data-skills/data/matmort.xls)
```
data("infmort", package = "dataskills")
head(infmort)
```
| Country | Year | Infant mortality rate (probability of dying between birth and age 1 per 1000 live births) |
| --- | --- | --- |
| Afghanistan | 2015 | 66\.3 \[52\.7\-83\.9] |
| Afghanistan | 2014 | 68\.1 \[55\.7\-83\.6] |
| Afghanistan | 2013 | 69\.9 \[58\.7\-83\.5] |
| Afghanistan | 2012 | 71\.7 \[61\.6\-83\.7] |
| Afghanistan | 2011 | 73\.4 \[64\.4\-84\.2] |
| Afghanistan | 2010 | 75\.1 \[66\.9\-85\.1] |
```
data("matmort", package = "dataskills")
head(matmort)
```
| Country | 1990 | 2000 | 2015 |
| --- | --- | --- | --- |
| Afghanistan | 1 340 \[ 878 \- 1 950] | 1 100 \[ 745 \- 1 570] | 396 \[ 253 \- 620] |
| Albania | 71 \[ 58 \- 88] | 43 \[ 33 \- 56] | 29 \[ 16 \- 46] |
| Algeria | 216 \[ 141 \- 327] | 170 \[ 118 \- 241] | 140 \[ 82 \- 244] |
| Angola | 1 160 \[ 627 \- 2 020] | 924 \[ 472 \- 1 730] | 477 \[ 221 \- 988] |
| Argentina | 72 \[ 64 \- 80] | 60 \[ 54 \- 65] | 52 \[ 44 \- 63] |
| Armenia | 58 \[ 51 \- 65] | 40 \[ 35 \- 46] | 25 \[ 21 \- 31] |
### 4\.8\.2 Wide to Long
`matmort` is in wide format, with a separate column for each year. Change it to long format, with a row for each Country/Year observation.
This example is complicated because the column names to gather *are* numbers. If the column names are non\-standard (e.g., have spaces, start with numbers, or have special characters), you can enclose them in backticks (\`) like the example below.
```
matmort_long <- matmort %>%
pivot_longer(cols = `1990`:`2015`,
names_to = "Year",
values_to = "stats") %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Albania", "Alban…
## $ Year <chr> "1990", "2000", "2015", "1990", "2000", "2015", "1990", "2000"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "1 100 [ 745 - 1 570]", "396 [ 253 - …
```
You can put `matmort` as the first argument to `pivot_longer()`; you don’t have to pipe it in. But when I’m working on data processing I often find myself needing to insert or rearrange steps, and I constantly introduce errors by forgetting to take the first argument out of a pipe chain, so now I start with the original data table and pipe from there.
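For example, this call without the pipe is equivalent to the one above:
```
# same pivot, with matmort as the first argument instead of piped in
matmort_long <- pivot_longer(matmort,
                             cols = `1990`:`2015`,
                             names_to = "Year",
                             values_to = "stats")
```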
Alternatively, you can use the `gather()` function.
```
matmort_long <- matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "71 [ 58 - 88]", "216 [ 141 - 327]",…
```
### 4\.8\.3 One Piece of Data per Column
The data in the `stats` column is in an unusual format with some sort of confidence interval in brackets and lots of extra spaces. We don’t need any of the spaces, so first we’ll remove them with `mutate()`, which we’ll learn more about in the next lesson.
The `separate` function will separate your data on anything that is not a number or letter, so try it first without specifying the `sep` argument. The `into` argument is a character vector of the new column names.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi")) %>%
glimpse()
```
```
## Warning: Expected 3 pieces. Additional pieces discarded in 543 rows [1, 2, 3, 4,
## 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...].
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
The `gsub(pattern, replacement, x)` function is a flexible way to do search and replace. The example above replaces all occurrences of the `pattern` `" "` (a space) with the `replacement` `""` (nothing) in the string `x` (the `stats` column). Use `sub()` instead if you only want to replace the first occurrence of a pattern. We only used a simple pattern here, but you can use more complicated [regex](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) patterns to replace, for example, all even digits (e.g., `gsub("[02468]", "", "id = 123456")`) or all occurrences of the word colour in US or UK spelling (e.g., `gsub("colo(u)?r", "**", "replace color, colour, or colours, but not collors")`).
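For instance, here is a small sketch of the difference between `sub()` and `gsub()` on a made\-up string like the ones in `stats`:
```
x <- "1 340 [ 878 - 1 950]"
sub(" ", "", x)  # replaces only the first space: "1340 [ 878 - 1 950]"
gsub(" ", "", x) # replaces every space: "1340[878-1950]"
```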
#### 4\.8\.3\.1 Handle spare columns with `extra`
The previous example should have given you a warning about “Additional pieces discarded in 543 rows.” This is because `separate` splits the column at the brackets and dashes, so the text `100[90-110]` would split into four values `c("100", "90", "110", "")`, but we only specified 3 new columns. The fourth value is always empty (just the part after the last bracket), so we are happy to drop it, but `separate` generates a warning so you don’t do that accidentally. You can turn off the warning by adding the `extra` argument and setting it to `"drop"`. Look at the help for `?tidyr::separate` to see what the other options do.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
#### 4\.8\.3\.2 Set delimiters with `sep`
Now do the same with `infmort`. It’s already in long format, so you don’t need to use `gather`, but the third column has a ridiculously long name, so we can just refer to it by its column number (3\).
```
infmort_split <- infmort %>%
separate(3, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66", "68", "69", "71", "73", "75", "76", "78", "80", "82", "8…
## $ ci_low <chr> "3", "1", "9", "7", "4", "1", "8", "6", "4", "3", "4", "7", "0…
## $ ci_hi <chr> "52", "55", "58", "61", "64", "66", "69", "71", "73", "75", "7…
```
**Wait, that didn’t work at all!** It split the column on spaces, brackets, *and* full stops. We just want to split on the spaces, brackets and dashes. So we need to manually set `sep` to what the delimiters are. Also, once there are more than a few arguments specified for a function, it’s easier to read them if you put one argument on each line.
You can use [regular expressions](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) to separate complex columns. Here, we want to separate on dashes and brackets. You can separate on a list of delimiters by putting them in parentheses, separated by `|`. It’s a little more complicated because brackets have a special meaning in regex, so you need to “escape” the left one with two backslashes: `\\[`.
```
infmort_split <- infmort %>%
separate(
col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])"
) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66.3 ", "68.1 ", "69.9 ", "71.7 ", "73.4 ", "75.1 ", "76.8 ",…
## $ ci_low <chr> "52.7", "55.7", "58.7", "61.6", "64.4", "66.9", "69.0", "71.2"…
## $ ci_hi <chr> "83.9", "83.6", "83.5", "83.7", "84.2", "85.1", "86.1", "87.3"…
```
#### 4\.8\.3\.3 Fix data types with `convert`
That’s better. Notice the `<chr>` next to `rate`, `ci_low` and `ci_hi` in the output above. That means these columns hold characters (like words), not numbers or integers. This can cause problems when you try to do things like average the numbers (you can’t average words), so we can fix it by adding the argument `convert` and setting it to `TRUE`.
```
infmort_split <- infmort %>%
separate(col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <dbl> 66.3, 68.1, 69.9, 71.7, 73.4, 75.1, 76.8, 78.6, 80.4, 82.3, 84…
## $ ci_low <dbl> 52.7, 55.7, 58.7, 61.6, 64.4, 66.9, 69.0, 71.2, 73.4, 75.5, 77…
## $ ci_hi <dbl> 83.9, 83.6, 83.5, 83.7, 84.2, 85.1, 86.1, 87.3, 88.9, 90.7, 92…
```
Do the same for `matmort`.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.4 All in one step
We can chain all the steps for `matmort` above together, since we don’t need those intermediate data tables.
```
matmort2 <- dataskills::matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(
col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE
) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.5 Columns by Year
Spread out the maternal mortality rate by year.
```
matmort_wide <- matmort2 %>%
spread(key = Year, value = rate) %>%
print()
```
```
## # A tibble: 542 x 6
## Country ci_low ci_hi `1990` `2000` `2015`
## <chr> <int> <int> <int> <int> <int>
## 1 Afghanistan 253 620 NA NA 396
## 2 Afghanistan 745 1570 NA 1100 NA
## 3 Afghanistan 878 1950 1340 NA NA
## 4 Albania 16 46 NA NA 29
## 5 Albania 33 56 NA 43 NA
## 6 Albania 58 88 71 NA NA
## 7 Algeria 82 244 NA NA 140
## 8 Algeria 118 241 NA 170 NA
## 9 Algeria 141 327 216 NA NA
## 10 Angola 221 988 NA NA 477
## # … with 532 more rows
```
Nope, that didn’t work at all, but it’s a really common mistake when spreading data. This is because `spread` matches on all the remaining columns, so Afghanistan with `ci_low` of 253 is treated as a different observation than Afghanistan with `ci_low` of 745\.
This is where `pivot_wider()` can be very useful. You can set `values_from` to multiple column names and their names will be added to the `names_from` values.
```
matmort_wide <- matmort2 %>%
pivot_wider(
names_from = Year,
values_from = c(rate, ci_low, ci_hi)
)
glimpse(matmort_wide)
```
```
## Rows: 181
## Columns: 10
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina"…
## $ rate_1990 <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33…
## $ rate_2000 <int> 1100, 43, 170, 924, 60, 40, 9, 5, 48, 61, 21, 399, 48, 26,…
## $ rate_2015 <int> 396, 29, 140, 477, 52, 25, 6, 4, 25, 80, 15, 176, 27, 4, 7…
## $ ci_low_1990 <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, …
## $ ci_low_2000 <int> 745, 33, 118, 472, 54, 35, 8, 4, 42, 50, 18, 322, 38, 22, …
## $ ci_low_2015 <int> 253, 16, 82, 221, 44, 21, 5, 3, 17, 53, 12, 125, 19, 3, 5,…
## $ ci_hi_1990 <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 3…
## $ ci_hi_2000 <int> 1570, 56, 241, 1730, 65, 46, 10, 6, 55, 74, 26, 496, 58, 3…
## $ ci_hi_2015 <int> 620, 46, 244, 988, 63, 31, 7, 5, 35, 124, 19, 280, 37, 6, …
```
### 4\.8\.6 Experimentum Data
Students in the Institute of Neuroscience and Psychology at the University of Glasgow can use the online experiment builder platform, [Experimentum](https://debruine.github.io/experimentum/). The platform is also [open source on github](https://github.com/debruine/experimentum) for anyone who can install it on a web server. It allows you to group questionnaires and experiments into **projects** with randomisation and counterbalancing. Data for questionnaires and experiments are downloadable in long format, but researchers often need to put them in wide format for analysis.
Look at the help menu for built\-in dataset `dataskills::experimentum_quests` to learn what each column is. Subjects are asked questions about dogs to test the different questionnaire response types.
* current: Do you own a dog? (yes/no)
* past: Have you ever owned a dog? (yes/no)
* name: What is the best name for a dog? (free short text)
* good: How good are dogs? (1\=pretty good:7\=very good)
* country: What country do borzois come from?
* good\_borzoi: How good are borzois? (0\=pretty good:100\=very good)
* text: Write some text about dogs. (free long text)
* time: What time is it? (time)
To get the dataset into wide format, where each question is in a separate column, use the following code:
```
q <- dataskills::experimentum_quests %>%
pivot_wider(id_cols = session_id:user_age,
names_from = q_name,
values_from = dv) %>%
type.convert(as.is = TRUE) %>%
print()
```
```
## # A tibble: 24 x 15
## session_id project_id quest_id user_id user_sex user_status user_age current
## <int> <int> <int> <int> <chr> <chr> <dbl> <int>
## 1 34034 1 1 31105 female guest 28.2 1
## 2 34104 1 1 31164 male registered 19.4 1
## 3 34326 1 1 31392 female guest 17 0
## 4 34343 1 1 31397 male guest 22 1
## 5 34765 1 1 31770 female guest 44 1
## 6 34796 1 1 31796 female guest 35.9 0
## 7 34806 1 1 31798 female guest 35 0
## 8 34822 1 1 31802 female guest 58 1
## 9 34864 1 1 31820 male guest 20 0
## 10 35014 1 1 31921 female student 39.2 1
## # … with 14 more rows, and 7 more variables: past <int>, name <chr>,
## # good <int>, country <chr>, text <chr>, good_borzoi <int>, time <chr>
```
The responses in the `dv` column have multiple types (e.g., integer, double, and character), but they are all represented as character strings when they’re in the same column. After you spread the data to wide format, each column should be given the correct data type. The function `type.convert()` makes a best guess at what type each new column should be and converts it. The argument `as.is = TRUE` converts columns where none of the numbers have decimal places to integers.
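As a minimal sketch of what `type.convert()` does, here is a made\-up all\-character tibble (the object name `chr_table` is just for illustration):
```
chr_table <- tibble(
  n = c("1", "2"),     # whole numbers only, so converted to integer
  x = c("1.5", "2.5"), # has decimal places, so converted to double
  s = c("a", "b")      # not numeric, so stays character
)
type.convert(chr_table, as.is = TRUE) %>% glimpse()
```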
4\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [long](https://psyteachr.github.io/glossary/l#long) | Data where each observation is on a separate row |
| [observation](https://psyteachr.github.io/glossary/o#observation) | All of the data about a single trial or question. |
| [value](https://psyteachr.github.io/glossary/v#value) | A single number or piece of data. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [wide](https://psyteachr.github.io/glossary/w#wide) | Data where all of the observations about one subject are in the same row |
4\.10 Exercises
---------------
Download the [exercises](exercises/04_tidyr_exercise.Rmd). See the [answers](exercises/04_tidyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(4)
# run this to access the answers
dataskills::exercise(4, answers = TRUE)
```
4\.1 Learning Objectives
------------------------
### 4\.1\.1 Basic
1. Understand the concept of [tidy data](tidyr.html#tidy-data) [(video)](https://youtu.be/EsSN4OdsNpc)
2. Be able to convert between long and wide formats using pivot functions [(video)](https://youtu.be/4dvLmjhwN8I)
* [`pivot_longer()`](tidyr.html#pivot_longer)
* [`pivot_wider()`](tidyr.html#pivot_wider)
3. Be able to use the 4 basic `tidyr` verbs [(video)](https://youtu.be/oUWjb0JC8zM)
* [`gather()`](tidyr.html#gather)
* [`separate()`](tidyr.html#separate)
* [`spread()`](tidyr.html#spread)
* [`unite()`](tidyr.html#unite)
4. Be able to chain functions using [pipes](tidyr.html#pipes) [(video)](https://youtu.be/itfrlLaN4SE)
### 4\.1\.2 Advanced
5. Be able to use [regular expressions](#regex) to separate complex columns
4\.2 Resources
--------------
* [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
* [Chapter 12: Tidy Data](http://r4ds.had.co.nz/tidy-data.html) in *R for Data Science*
* [Chapter 18: Pipes](http://r4ds.had.co.nz/pipes.html) in *R for Data Science*
* [Data wrangling cheat sheet](https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf)
4\.3 Setup
----------
```
# libraries needed
library(tidyverse)
library(dataskills)
set.seed(8675309) # makes sure random numbers are reproducible
```
4\.4 Tidy Data
--------------
### 4\.4\.1 Three Rules
* Each [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.") must have its own column
* Each [observation](https://psyteachr.github.io/glossary/o#observation "All of the data about a single trial or question.") must have its own row
* Each [value](https://psyteachr.github.io/glossary/v#value "A single number or piece of data.") must have its own cell
This table has three observations per row and the `total_meanRT` column contains two values.
Table 4\.1: Untidy table
| id | score\_1 | score\_2 | score\_3 | rt\_1 | rt\_2 | rt\_3 | total\_meanRT |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 4 | 3 | 7 | 857 | 890 | 859 | 14 (869\) |
| 2 | 3 | 1 | 1 | 902 | 900 | 959 | 5 (920\) |
| 3 | 2 | 5 | 4 | 757 | 823 | 901 | 11 (827\) |
| 4 | 6 | 2 | 6 | 844 | 788 | 624 | 14 (752\) |
| 5 | 1 | 7 | 2 | 659 | 764 | 690 | 10 (704\) |
This is the tidy version.
Table 4\.1: Tidy table
| id | trial | rt | score | total | mean\_rt |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 857 | 4 | 14 | 869 |
| 1 | 2 | 890 | 3 | 14 | 869 |
| 1 | 3 | 859 | 7 | 14 | 869 |
| 2 | 1 | 902 | 3 | 5 | 920 |
| 2 | 2 | 900 | 1 | 5 | 920 |
| 2 | 3 | 959 | 1 | 5 | 920 |
| 3 | 1 | 757 | 2 | 11 | 827 |
| 3 | 2 | 823 | 5 | 11 | 827 |
| 3 | 3 | 901 | 4 | 11 | 827 |
| 4 | 1 | 844 | 6 | 14 | 752 |
| 4 | 2 | 788 | 2 | 14 | 752 |
| 4 | 3 | 624 | 6 | 14 | 752 |
| 5 | 1 | 659 | 1 | 10 | 704 |
| 5 | 2 | 764 | 7 | 10 | 704 |
| 5 | 3 | 690 | 2 | 10 | 704 |
### 4\.4\.2 Wide versus long
Data tables can be in [wide](https://psyteachr.github.io/glossary/w#wide "Data where all of the observations about one subject are in the same row") format or [long](https://psyteachr.github.io/glossary/l#long "Data where each observation is on a separate row") format (and sometimes a mix of the two). Wide data are where all of the observations about one subject are in the same row, while long data are where each observation is on a separate row. You often need to convert between these formats to do different types of analyses or data processing.
Imagine a study where each subject completes a questionnaire with three items. Each answer is an [observation](https://psyteachr.github.io/glossary/o#observation "All of the data about a single trial or question.") of that subject. You are probably most familiar with data like this in a wide format, where the subject `id` is in one column, and each of the three item responses is in its own column.
Table 4\.2: Wide data
| id | Q1 | Q2 | Q3 |
| --- | --- | --- | --- |
| A | 1 | 2 | 3 |
| B | 4 | 5 | 6 |
The same data can be represented in a long format by creating a new column that specifies what `item` the observation is from and a new column that specifies the `value` of that observation.
Table 4\.3: Long data
| id | item | value |
| --- | --- | --- |
| A | Q1 | 1 |
| B | Q1 | 4 |
| A | Q2 | 2 |
| B | Q2 | 5 |
| A | Q3 | 3 |
| B | Q3 | 6 |
Create a long version of the following table.
| id | fav\_colour | fav\_animal |
| --- | --- | --- |
| Lisa | red | echidna |
| Robbie | orange | babirusa |
| Steven | green | frog |
Answer
Your answer doesn’t need to have the same column headers or be in the same order.
| id | fav | answer |
| --- | --- | --- |
| Lisa | colour | red |
| Lisa | animal | echidna |
| Robbie | colour | orange |
| Robbie | animal | babirusa |
| Steven | colour | green |
| Steven | animal | frog |
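One way to produce a table like this is sketched below, using `pivot_longer()` from the next section (the table name `favourites` is just an assumption):
```
favourites <- tibble(
  id = c("Lisa", "Robbie", "Steven"),
  fav_colour = c("red", "orange", "green"),
  fav_animal = c("echidna", "babirusa", "frog")
)
favourites_long <- favourites %>%
  pivot_longer(cols = fav_colour:fav_animal,
               names_to = "fav",      # headers go here (with the fav_ prefix removed)
               names_prefix = "fav_",
               values_to = "answer")  # values go here
```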
4\.5 Pivot Functions
--------------------
The pivot functions allow you to transform a data table from wide to long or long to wide in one step.
### 4\.5\.1 Load Data
We will use the dataset `personality` from the dataskills package (or download the data from [personality.csv](https://psyteachr.github.io/msc-data-skills/data/personality.csv)). These data are from a 5\-factor personality questionnaire. Each question is labelled with the domain (Op \= openness, Co \= conscientiousness, Ex \= extroversion, Ag \= agreeableness, and Ne \= neuroticism) and the question number.
```
data("personality", package = "dataskills")
```
| user\_id | date | Op1 | Ne1 | Ne2 | Op2 | Ex1 | Ex2 | Co1 | Co2 | Ne3 | Ag1 | Ag2 | Ne4 | Ex3 | Co3 | Op3 | Ex4 | Op4 | Ex5 | Ag3 | Co4 | Co5 | Ne5 | Op5 | Ag4 | Op6 | Co6 | Ex6 | Ne6 | Co7 | Ag5 | Co8 | Ex7 | Ne7 | Co9 | Op7 | Ne8 | Ag6 | Ag7 | Co10 | Ex8 | Ex9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 2006\-03\-23 | 3 | 4 | 0 | 6 | 3 | 3 | 3 | 3 | 0 | 2 | 1 | 3 | 3 | 2 | 2 | 1 | 3 | 3 | 1 | 3 | 0 | 3 | 6 | 1 | 0 | 6 | 3 | 1 | 3 | 3 | 3 | 3 | NA | 3 | 0 | 2 | NA | 3 | 1 | 2 | 4 |
| 1 | 2006\-02\-08 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 | 6 | 6 | 6 | 0 | 6 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 |
| 2 | 2005\-10\-24 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 5 | 1 | 5 | 1 | 1 | 1 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 5 | 5 | 5 | 1 | 5 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 |
| 5 | 2005\-12\-07 | 6 | 4 | 4 | 4 | 2 | 3 | 3 | 3 | 1 | 4 | 0 | 2 | 5 | 3 | 5 | 3 | 6 | 6 | 1 | 5 | 5 | 4 | 2 | 4 | 1 | 4 | 3 | 1 | 1 | 0 | 1 | 4 | 2 | 4 | 5 | 1 | 2 | 1 | 5 | 4 | 5 |
| 8 | 2006\-07\-27 | 6 | 1 | 2 | 6 | 2 | 3 | 5 | 4 | 0 | 6 | 5 | 3 | 3 | 4 | 5 | 3 | 6 | 3 | 0 | 5 | 5 | 1 | 5 | 6 | 6 | 6 | 0 | 0 | 3 | 2 | 3 | 1 | 0 | 3 | 5 | 1 | 3 | 1 | 3 | 3 | 5 |
| 108 | 2006\-02\-28 | 3 | 2 | 1 | 4 | 4 | 4 | 4 | 3 | 1 | 5 | 4 | 2 | 3 | 4 | 4 | 3 | 3 | 3 | 4 | 3 | 3 | 1 | 4 | 5 | 4 | 5 | 4 | 1 | 4 | 5 | 4 | 2 | 2 | 4 | 4 | 1 | 4 | 3 | 5 | 4 | 2 |
### 4\.5\.2 pivot\_longer()
`pivot_longer()` converts a wide data table to long format by converting the headers from specified columns into the values of new columns, and combining the values of those columns into a new condensed column.
* `cols` refers to the columns you want to make long. You can refer to them by their names, like `col1, col2, col3, col4` or `col1:col4`, or by their numbers, like `8, 9, 10` or `8:10`.
* `names_to` is what you want to call the new columns that the gathered column headers will go into; it’s “domain” and “qnumber” in this example.
* `names_sep` is an optional argument if you have more than one value for `names_to`. It specifies the characters or position to split the values of the `cols` headers.
* `values_to` is what you want to call the values from the `cols` columns; they’re “score” in this example.
```
personality_long <- pivot_longer(
data = personality,
cols = Op1:Ex9, # columns to make long
names_to = c("domain", "qnumber"), # new column names for headers
names_sep = 2, # how to split the headers
values_to = "score" # new column name for values
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
## $ date <date> 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2…
## $ domain <chr> "Op", "Ne", "Ne", "Op", "Ex", "Ex", "Co", "Co", "Ne", "Ag", "A…
## $ qnumber <chr> "1", "1", "2", "2", "1", "2", "1", "2", "3", "1", "2", "4", "3…
## $ score <dbl> 3, 4, 0, 6, 3, 3, 3, 3, 0, 2, 1, 3, 3, 2, 2, 1, 3, 3, 1, 3, 0,…
```
You can pipe a data table to `glimpse()` at the end to have a quick look at it. It will still save to the object.
What would you set `names_sep` to in order to split the `cols` headers listed below into the results? (One possible set of answers is sketched below.)
| `cols` | `names_to` | `names_sep` |
| --- | --- | --- |
| `A_1`, `A_2`, `B_1`, `B_2` | `c("condition", "version")` | |
| `A1`, `A2`, `B1`, `B2` | `c("condition", "version")` | |
| `cat-day&pre`, `cat-day&post`, `cat-night&pre`, `cat-night&post`, `dog-day&pre`, `dog-day&post`, `dog-night&pre`, `dog-night&post` | `c("pet", "time", "condition")` | |
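One possible set of answers is sketched below with tiny made\-up tables (`df1`, `df2` and `df3` are just for illustration):
```
df1 <- tibble(A_1 = 1, A_2 = 2, B_1 = 3, B_2 = 4)
pivot_longer(df1, cols = everything(),
             names_to = c("condition", "version"), names_sep = "_")

df2 <- tibble(A1 = 1, A2 = 2, B1 = 3, B2 = 4)
pivot_longer(df2, cols = everything(),
             names_to = c("condition", "version"), names_sep = 1)

df3 <- tibble(`cat-day&pre` = 1, `cat-night&post` = 2)
pivot_longer(df3, cols = everything(),
             names_to = c("pet", "time", "condition"), names_sep = "-|&")
```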
### 4\.5\.3 pivot\_wider()
We can also go from long to wide format using the `pivot_wider()` function.
* `names_from` is the columns that contain your new column headers.
* `values_from` is the column that contains the values for the new columns.
* `names_sep` is the character string used to join names if `names_from` is more than one column.
```
personality_wide <- pivot_wider(
data = personality_long,
names_from = c(domain, qnumber),
values_from = score,
names_sep = ""
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Op1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Ne1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Op2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Ex1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Co1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Ne3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ag1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ne4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ex3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Co3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Op3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Ex4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Op4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Ex5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ag3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Co4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Ne5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Op5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Ag4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Op6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Co6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Ex6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ne6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Co7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Ag5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Co8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Ex7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ne7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Co9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Op7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
## $ Ne8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Ag6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Ex8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
```
4\.6 Tidy Verbs
---------------
The pivot functions above are relatively new functions that combine the four basic tidy verbs: `gather()`, `separate()`, `unite()`, and `spread()`. You can also convert data between long and wide formats using these verbs. Many researchers still use them, and older code will not use the pivot functions, so it is useful to know how to interpret them.
### 4\.6\.1 gather()
Much like `pivot_longer()`, `gather()` makes a wide data table long by creating a column for the headers and a column for the values. The main difference is that you cannot turn the headers into more than one column.
* `key` is what you want to call the new column that the gathered column headers will go into; it’s “question” in this example. It is like `names_to` in `pivot_longer()`, but can only take one value (to split the headers into multiple columns, use `separate()` afterwards).
* `value` is what you want to call the values in the gathered columns; they’re “score” in this example. It is like `values_to` in `pivot_longer()`.
* `...` refers to the columns you want to gather. It is like `cols` in `pivot_longer()`.
The `gather()` function converts `personality` from a wide data table to long format, with a row for each user/question observation. The resulting data table should have the columns: `user_id`, `date`, `question`, and `score`.
```
personality_gathered <- gather(
data = personality,
key = "question", # new column name for gathered headers
value = "score", # new column name for gathered values
Op1:Ex9 # columns to gather
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ question <chr> "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.2 separate()
* `col` is the column you want to separate
* `into` is a vector of new column names
* `sep` is the character(s) that separate your new columns. This defaults to anything that isn’t alphanumeric, like `.` `,` `_` `-` `/` `:`, and is like the `names_sep` argument in `pivot_longer()`.
Split the `question` column into two columns: `domain` and `qnumber`.
There is no character to split on here, but you can separate a column after a specific number of characters by setting `sep` to an integer. For example, to split “abcde” after the third character, use `sep = 3`, which results in `c("abc", "de")`. You can also use a negative number to split before the *n*th character from the right. For example, to split a column that has words of various lengths and 2\-digit suffixes (like “lisa03” and “amanda38”), you can use `sep = -2`.
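For instance, a minimal sketch of `sep = -2` on a made\-up column of name\-plus\-suffix strings:
```
tibble(code = c("lisa03", "amanda38")) %>%
  separate(code, into = c("name", "number"), sep = -2)
```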
```
personality_sep <- separate(
data = personality_gathered,
col = question, # column to separate
into = c("domain", "qnumber"), # new column names
sep = 2 # where to separate
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ domain <chr> "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "O…
## $ qnumber <chr> "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
```
If you want to separate just at full stops, you need to use `sep = "\\."`, not `sep = "."`. The two backslashes **escape** the full stop, making it interpreted as a literal full stop and not the regular expression for any character.
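For instance (a small sketch with a made\-up version string):
```
tibble(version = c("1.2.3", "4.5.6")) %>%
  separate(version, into = c("major", "minor", "patch"), sep = "\\.")
```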
### 4\.6\.3 unite()
* `col` is your new united column
* `...` refers to the columns you want to unite
* `sep` is the character(s) that will separate your united columns
Put the domain and qnumber columns back together into a new column named `domain_n`. Make it in a format like “Op\_Q1”.
```
personality_unite <- unite(
data = personality_sep,
col = "domain_n", # new column name
domain, qnumber, # columns to unite
sep = "_Q" # separation characters
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ domain_n <chr> "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.4 spread()
You can reverse the processes above, as well. For example, you can convert data from long format into wide format.
* `key` is the column that contains your new column headers. It is like `names_from` in `pivot_wider()`, but can only take one value (multiple values need to be merged first using `unite()`).
* `value` is the column that contains the values in the new spread columns. It is like `values_from` in `pivot_wider()`.
```
personality_spread <- spread(
data = personality_unite,
key = domain_n, # column that contains new headers
value = score # column that contains new values
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Ag_Q1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag_Q2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ag_Q3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Ag_Q4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Ag_Q5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Ag_Q6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag_Q7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co_Q1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co_Q10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Co_Q2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Co_Q3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Co_Q4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co_Q5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Co_Q6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Co_Q7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Co_Q8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Co_Q9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Ex_Q1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex_Q2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Ex_Q3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Ex_Q4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Ex_Q5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ex_Q6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ex_Q7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ex_Q8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex_Q9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
## $ Ne_Q1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne_Q2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Ne_Q3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ne_Q4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ne_Q5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Ne_Q6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Ne_Q7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Ne_Q8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Op_Q1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Op_Q2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Op_Q3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Op_Q4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Op_Q5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Op_Q6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Op_Q7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
```
4\.7 Pipes
----------
Pipes are a way to order your code in a more readable format.
Let’s say you have a small data table with 10 participant IDs, two columns with variable type A, and two columns with variable type B. You want to calculate the mean of the A variables and the mean of the B variables and return a table with 10 rows (one for each participant) and 3 columns (`id`, `A_mean` and `B_mean`).
One way you could do this is by creating a new object at every step and using that object in the next step. This is pretty clear, but you’ve created 6 unnecessary data objects in your environment. This can get confusing in very long scripts.
```
# make a data table with 10 subjects
data_original <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10, 3)
)
# gather columns A1 to B2 into "variable" and "value" columns
data_gathered <- gather(data_original, variable, value, A1:B2)
# separate the variable column at the _ into "var" and "var_n" columns
data_separated <- separate(data_gathered, variable, c("var", "var_n"), sep = 1)
# group the data by id and var
data_grouped <- group_by(data_separated, id, var)
# calculate the mean value for each id/var
data_summarised <- summarise(data_grouped, mean = mean(value), .groups = "drop")
# spread the mean column into A and B columns
data_spread <- spread(data_summarised, var, mean)
# rename A and B to A_mean and B_mean
data <- rename(data_spread, A_mean = A, B_mean = B)
data
```
| id | A\_mean | B\_mean |
| --- | --- | --- |
| 1 | \-0\.5938256 | 1\.0243046 |
| 2 | 0\.7440623 | 2\.7172046 |
| 3 | 0\.9309275 | 3\.9262358 |
| 4 | 0\.7197686 | 1\.9662632 |
| 5 | \-0\.0280832 | 1\.9473456 |
| 6 | \-0\.0982555 | 3\.2073687 |
| 7 | 0\.1256922 | 0\.9256321 |
| 8 | 1\.4526447 | 2\.3778116 |
| 9 | 0\.2976443 | 1\.6617481 |
| 10 | 0\.5589199 | 2\.1034679 |
You *can* name each object `data` and keep replacing the old data object with the new one at each step. This will keep your environment clean, but I don’t recommend it because it makes it too easy to accidentally run your code out of order when you are running line\-by\-line for development or debugging.
One way to avoid extra objects is to nest your functions, literally replacing each data object with the code that generated it in the previous step. This can be fine for very short chains.
```
mean_petal_width <- round(mean(iris$Petal.Width), 2)
```
But it gets extremely confusing for long chains:
```
# do not ever do this!!
data <- rename(
spread(
summarise(
group_by(
separate(
gather(
tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)),
variable, value, A1:B2),
variable, c("var", "var_n"), sep = 1),
id, var),
mean = mean(value), .groups = "drop"),
var, mean),
A_mean = A, B_mean = B)
```
The pipe lets you “pipe” the result of each function into the next function, allowing you to put your code in a logical order without creating too many extra objects.
```
# calculate mean of A and B variables for each participant
data <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)
) %>%
gather(variable, value, A1:B2) %>%
separate(variable, c("var", "var_n"), sep=1) %>%
group_by(id, var) %>%
summarise(mean = mean(value), .groups = "drop") %>%
spread(var, mean) %>%
rename(A_mean = A, B_mean = B)
```
You can read this code from top to bottom as follows:
1. Make a tibble called `data` with
* id of 1 to 10,
* A1 of 10 random numbers from a normal distribution,
* A2 of 10 random numbers from a normal distribution,
* B1 of 10 random numbers from a normal distribution,
* B2 of 10 random numbers from a normal distribution; and then
2. Gather to create `variable` and `value` columns from columns `A1` to `B2`; and then
3. Separate the column `variable` into 2 new columns called `var` and `var_n`, separating at character 1; and then
4. Group by columns `id` and `var`; and then
5. Summarise a new column called `mean` as the mean of the `value` column for each group and drop the grouping; and then
6. Spread to make new columns with the key names in `var` and values in `mean`; and then
7. Rename to make columns called `A_mean` (old `A`) and `B_mean` (old `B`)
You can make intermediate objects whenever you need to break up your code because it’s getting too complicated or you need to debug something.
You can debug a pipe by highlighting from the beginning to just before the pipe you want to stop at. Try this by highlighting from `data <-` to the end of the `separate` function and typing cmd\-return. What does `data` look like now?
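For example, if you highlighted from `data <-` down to the end of the `separate()` line above and ran just that, you would be running something like this sketch:
```
library(tidyverse) # loaded in the chapter setup

# run only the first part of the pipe to inspect the intermediate result
data <- tibble(
  id = 1:10,
  A1 = rnorm(10, 0),
  A2 = rnorm(10, 1),
  B1 = rnorm(10, 2),
  B2 = rnorm(10, 3)
) %>%
  gather(variable, value, A1:B2) %>%
  separate(variable, c("var", "var_n"), sep = 1)

# data is now long, with columns id, var, var_n and value,
# before any grouping or summarising has happened
```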
Chain all the steps above using pipes.
```
personality_reshaped <- personality %>%
gather("question", "score", Op1:Ex9) %>%
separate(question, c("domain", "qnumber"), sep = 2) %>%
unite("domain_n", domain, qnumber, sep = "_Q") %>%
spread(domain_n, score)
```
4\.8 More Complex Example
-------------------------
### 4\.8\.1 Load Data
Get data on infant and maternal mortality rates from the dataskills package. If you don’t have the package, you can download them here:
* [infant mortality](https://psyteachr.github.io/msc-data-skills/data/infmort.csv)
* [maternal mortality](https://psyteachr.github.io/msc-data-skills/data/matmort.xls)
```
data("infmort", package = "dataskills")
head(infmort)
```
| Country | Year | Infant mortality rate (probability of dying between birth and age 1 per 1000 live births) |
| --- | --- | --- |
| Afghanistan | 2015 | 66\.3 \[52\.7\-83\.9] |
| Afghanistan | 2014 | 68\.1 \[55\.7\-83\.6] |
| Afghanistan | 2013 | 69\.9 \[58\.7\-83\.5] |
| Afghanistan | 2012 | 71\.7 \[61\.6\-83\.7] |
| Afghanistan | 2011 | 73\.4 \[64\.4\-84\.2] |
| Afghanistan | 2010 | 75\.1 \[66\.9\-85\.1] |
```
data("matmort", package = "dataskills")
head(matmort)
```
| Country | 1990 | 2000 | 2015 |
| --- | --- | --- | --- |
| Afghanistan | 1 340 \[ 878 \- 1 950] | 1 100 \[ 745 \- 1 570] | 396 \[ 253 \- 620] |
| Albania | 71 \[ 58 \- 88] | 43 \[ 33 \- 56] | 29 \[ 16 \- 46] |
| Algeria | 216 \[ 141 \- 327] | 170 \[ 118 \- 241] | 140 \[ 82 \- 244] |
| Angola | 1 160 \[ 627 \- 2 020] | 924 \[ 472 \- 1 730] | 477 \[ 221 \- 988] |
| Argentina | 72 \[ 64 \- 80] | 60 \[ 54 \- 65] | 52 \[ 44 \- 63] |
| Armenia | 58 \[ 51 \- 65] | 40 \[ 35 \- 46] | 25 \[ 21 \- 31] |
### 4\.8\.2 Wide to Long
`matmort` is in wide format, with a separate column for each year. Change it to long format, with a row for each Country/Year observation.
This example is complicated because the column names to gather *are* numbers. If the column names are non\-standard (e.g., have spaces, start with numbers, or have special characters), you can enclose them in backticks (\`) like the example below.
```
matmort_long <- matmort %>%
pivot_longer(cols = `1990`:`2015`,
names_to = "Year",
values_to = "stats") %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Albania", "Alban…
## $ Year <chr> "1990", "2000", "2015", "1990", "2000", "2015", "1990", "2000"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "1 100 [ 745 - 1 570]", "396 [ 253 - …
```
You can put `matmort` in as the first argument to `pivot_longer()`; you don’t have to pipe it in. But when I’m working on data processing I often find myself needing to insert or rearrange steps, and I constantly introduce errors by forgetting to take the first argument out of a pipe chain, so now I start with the original data table and pipe from there.
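For reference, here is a sketch of the same call without the pipe, with `matmort` passed directly as the first argument:
```
# same reshape as above, without piping matmort in
matmort_long <- pivot_longer(matmort,
                             cols = `1990`:`2015`,
                             names_to = "Year",
                             values_to = "stats")
```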
Alternatively, you can use the `gather()` function.
```
matmort_long <- matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "71 [ 58 - 88]", "216 [ 141 - 327]",…
```
### 4\.8\.3 One Piece of Data per Column
The data in the `stats` column is in an unusual format with some sort of confidence interval in brackets and lots of extra spaces. We don’t need any of the spaces, so first we’ll remove them with `mutate()`, which we’ll learn more about in the next lesson.
The `separate` function will separate your data on anything that is not a number or letter, so try it first without specifying the `sep` argument. The `into` argument is a character vector of the new column names.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi")) %>%
glimpse()
```
```
## Warning: Expected 3 pieces. Additional pieces discarded in 543 rows [1, 2, 3, 4,
## 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...].
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
The `gsub(pattern, replacement, x)` function is a flexible way to do search and replace. The example above replaces all occurrences of the `pattern` " " (a space) with the `replacement` "" (nothing) in the string `x` (the `stats` column). Use `sub()` instead if you only want to replace the first occurrence of a pattern. We only used a simple pattern here, but you can use more complicated [regex](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) patterns to replace, for example, all even digits (e.g., `gsub("[02468]", "*", "id = 123456")`) or all occurrences of the word colour in US or UK spelling (e.g., `gsub("colo(u)?r", "***", "replace color, colour, or colours, but not collors")`).
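Here is a small sketch (standalone strings, not the lesson data) comparing `sub()` and `gsub()` and showing a slightly fancier pattern:
```
sub(" ", "", "1 340 [ 878 - 1 950]")   # replaces only the first space
gsub(" ", "", "1 340 [ 878 - 1 950]")  # replaces every space
gsub("colo(u)?r", "***", "color, colour, collor") # optional "u"; "collor" is untouched
```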
#### 4\.8\.3\.1 Handle spare columns with `extra`
The previous example should have given you a warning about “Additional pieces discarded in 543 rows.” This is because `separate` splits the column at the brackets and dashes, so the text `100[90-110]` would split into four values `c("100", "90", "110", "")`, but we only specified 3 new columns. The fourth value is always empty (just the part after the last bracket), so we are happy to drop it, but `separate` generates a warning so you don’t do that accidentally. You can turn off the warning by adding the `extra` argument and setting it to `"drop"`. Look at the help for `??tidyr::separate` to see what the other options do.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
#### 4\.8\.3\.2 Set delimiters with `sep`
Now do the same with `infmort`. It’s already in long format, so you don’t need to use `gather`, but the third column has a ridiculously long name, so we can just refer to it by its column number (3\).
```
infmort_split <- infmort %>%
separate(3, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66", "68", "69", "71", "73", "75", "76", "78", "80", "82", "8…
## $ ci_low <chr> "3", "1", "9", "7", "4", "1", "8", "6", "4", "3", "4", "7", "0…
## $ ci_hi <chr> "52", "55", "58", "61", "64", "66", "69", "71", "73", "75", "7…
```
**Wait, that didn’t work at all!** It split the column on spaces, brackets, *and* full stops. We just want to split on the spaces, brackets and dashes. So we need to manually set `sep` to what the delimiters are. Also, once there are more than a few arguments specified for a function, it’s easier to read them if you put one argument on each line.
You can use [regular expressions](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) to separate complex columns. Here, we want to separate on dashes and brackets. You can separate on a list of delimiters by putting them in parentheses, separated by `|`. It’s a little more complicated because brackets have a special meaning in regex, so you need to **escape** the left one with two backslashes: `\\[`.
```
infmort_split <- infmort %>%
separate(
col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])"
) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66.3 ", "68.1 ", "69.9 ", "71.7 ", "73.4 ", "75.1 ", "76.8 ",…
## $ ci_low <chr> "52.7", "55.7", "58.7", "61.6", "64.4", "66.9", "69.0", "71.2"…
## $ ci_hi <chr> "83.9", "83.6", "83.5", "83.7", "84.2", "85.1", "86.1", "87.3"…
```
#### 4\.8\.3\.3 Fix data types with `convert`
That’s better. Notice the `<chr>` next to `rate`, `ci_low` and `ci_hi`. That means these columns hold characters (like words), not numbers or integers. This can cause problems when you try to do things like average the numbers (you can’t average words), so we can fix it by adding the argument `convert` and setting it to `TRUE`.
```
infmort_split <- infmort %>%
separate(col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <dbl> 66.3, 68.1, 69.9, 71.7, 73.4, 75.1, 76.8, 78.6, 80.4, 82.3, 84…
## $ ci_low <dbl> 52.7, 55.7, 58.7, 61.6, 64.4, 66.9, 69.0, 71.2, 73.4, 75.5, 77…
## $ ci_hi <dbl> 83.9, 83.6, 83.5, 83.7, 84.2, 85.1, 86.1, 87.3, 88.9, 90.7, 92…
```
Do the same for `matmort`.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.4 All in one step
We can chain all the steps for `matmort` above together, since we don’t need those intermediate data tables.
```
matmort2 <- dataskills::matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(
col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE
) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.5 Columns by Year
Spread out the maternal mortality rate by year.
```
matmort_wide <- matmort2 %>%
spread(key = Year, value = rate) %>%
print()
```
```
## # A tibble: 542 x 6
## Country ci_low ci_hi `1990` `2000` `2015`
## <chr> <int> <int> <int> <int> <int>
## 1 Afghanistan 253 620 NA NA 396
## 2 Afghanistan 745 1570 NA 1100 NA
## 3 Afghanistan 878 1950 1340 NA NA
## 4 Albania 16 46 NA NA 29
## 5 Albania 33 56 NA 43 NA
## 6 Albania 58 88 71 NA NA
## 7 Algeria 82 244 NA NA 140
## 8 Algeria 118 241 NA 170 NA
## 9 Algeria 141 327 216 NA NA
## 10 Angola 221 988 NA NA 477
## # … with 532 more rows
```
Nope, that didn’t work at all, but it’s a really common mistake when spreading data. This is because `spread` matches on all the remaining columns, so Afghanistan with `ci_low` of 253 is treated as a different observation than Afghanistan with `ci_low` of 745\.
This is where `pivot_wider()` can be very useful. You can set `values_from` to multiple column names and their names will be added to the `names_from` values.
```
matmort_wide <- matmort2 %>%
pivot_wider(
names_from = Year,
values_from = c(rate, ci_low, ci_hi)
)
glimpse(matmort_wide)
```
```
## Rows: 181
## Columns: 10
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina"…
## $ rate_1990 <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33…
## $ rate_2000 <int> 1100, 43, 170, 924, 60, 40, 9, 5, 48, 61, 21, 399, 48, 26,…
## $ rate_2015 <int> 396, 29, 140, 477, 52, 25, 6, 4, 25, 80, 15, 176, 27, 4, 7…
## $ ci_low_1990 <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, …
## $ ci_low_2000 <int> 745, 33, 118, 472, 54, 35, 8, 4, 42, 50, 18, 322, 38, 22, …
## $ ci_low_2015 <int> 253, 16, 82, 221, 44, 21, 5, 3, 17, 53, 12, 125, 19, 3, 5,…
## $ ci_hi_1990 <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 3…
## $ ci_hi_2000 <int> 1570, 56, 241, 1730, 65, 46, 10, 6, 55, 74, 26, 496, 58, 3…
## $ ci_hi_2015 <int> 620, 46, 244, 988, 63, 31, 7, 5, 35, 124, 19, 280, 37, 6, …
```
### 4\.8\.6 Experimentum Data
Students in the Institute of Neuroscience and Psychology at the University of Glasgow can use the online experiment builder platform, [Experimentum](https://debruine.github.io/experimentum/). The platform is also [open source on github](https://github.com/debruine/experimentum) for anyone who can install it on a web server. It allows you to group questionnaires and experiments into **projects** with randomisation and counterbalancing. Data for questionnaires and experiments are downloadable in long format, but researchers often need to put them in wide format for analysis.
Look at the help menu for built\-in dataset `dataskills::experimentum_quests` to learn what each column is. Subjects are asked questions about dogs to test the different questionnaire response types.
* current: Do you own a dog? (yes/no)
* past: Have you ever owned a dog? (yes/no)
* name: What is the best name for a dog? (free short text)
* good: How good are dogs? (1\=pretty good:7\=very good)
* country: What country do borzois come from?
* good\_borzoi: How good are borzois? (0\=pretty good:100\=very good)
* text: Write some text about dogs. (free long text)
* time: What time is it? (time)
To get the dataset into wide format, where each question is in a separate column, use the following code:
```
q <- dataskills::experimentum_quests %>%
pivot_wider(id_cols = session_id:user_age,
names_from = q_name,
values_from = dv) %>%
type.convert(as.is = TRUE) %>%
print()
```
```
## # A tibble: 24 x 15
## session_id project_id quest_id user_id user_sex user_status user_age current
## <int> <int> <int> <int> <chr> <chr> <dbl> <int>
## 1 34034 1 1 31105 female guest 28.2 1
## 2 34104 1 1 31164 male registered 19.4 1
## 3 34326 1 1 31392 female guest 17 0
## 4 34343 1 1 31397 male guest 22 1
## 5 34765 1 1 31770 female guest 44 1
## 6 34796 1 1 31796 female guest 35.9 0
## 7 34806 1 1 31798 female guest 35 0
## 8 34822 1 1 31802 female guest 58 1
## 9 34864 1 1 31820 male guest 20 0
## 10 35014 1 1 31921 female student 39.2 1
## # … with 14 more rows, and 7 more variables: past <int>, name <chr>,
## # good <int>, country <chr>, text <chr>, good_borzoi <int>, time <chr>
```
The responses in the `dv` column have multiple types (e.g., `integer`, `double`, and `character`), but they are all represented as character strings when they’re in the same column. After you spread the data to wide format, each column should be given the correct data type. The function `type.convert()` makes a best guess at what type each new column should be and converts it. The argument `as.is = TRUE` converts columns where none of the numbers have decimal places to integers.
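As a quick illustration (a toy tibble, not the questionnaire data), `type.convert()` turns all-character columns into sensible types:
```
library(tidyverse) # loaded in the chapter setup

chr_tbl <- tibble(
  n    = c("1", "2"),     # looks like integers
  x    = c("1.5", "2.5"), # looks like doubles
  word = c("a", "b")      # stays character
)

type.convert(chr_tbl, as.is = TRUE) %>% glimpse()
# n becomes <int>, x becomes <dbl>, word stays <chr>
```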
4\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [long](https://psyteachr.github.io/glossary/l#long) | Data where each observation is on a separate row |
| [observation](https://psyteachr.github.io/glossary/o#observation) | All of the data about a single trial or question. |
| [value](https://psyteachr.github.io/glossary/v#value) | A single number or piece of data. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [wide](https://psyteachr.github.io/glossary/w#wide) | Data where all of the observations about one subject are in the same row |
4\.10 Exercises
---------------
Download the [exercises](exercises/04_tidyr_exercise.Rmd). See the [answers](exercises/04_tidyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(4)
# run this to access the answers
dataskills::exercise(4, answers = TRUE)
```
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/tidyr.html |
Chapter 4 Tidy Data
===================
4\.1 Learning Objectives
------------------------
### 4\.1\.1 Basic
1. Understand the concept of [tidy data](tidyr.html#tidy-data) [(video)](https://youtu.be/EsSN4OdsNpc)
2. Be able to convert between long and wide formats using pivot functions [(video)](https://youtu.be/4dvLmjhwN8I)
* [`pivot_longer()`](tidyr.html#pivot_longer)
* [`pivot_wider()`](tidyr.html#pivot_wider)
3. Be able to use the 4 basic `tidyr` verbs [(video)](https://youtu.be/oUWjb0JC8zM)
* [`gather()`](tidyr.html#gather)
* [`separate()`](tidyr.html#separate)
* [`spread()`](tidyr.html#spread)
* [`unite()`](tidyr.html#unite)
4. Be able to chain functions using [pipes](tidyr.html#pipes) [(video)](https://youtu.be/itfrlLaN4SE)
### 4\.1\.2 Advanced
5. Be able to use [regular expressions](#regex) to separate complex columns
4\.2 Resources
--------------
* [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
* [Chapter 12: Tidy Data](http://r4ds.had.co.nz/tidy-data.html) in *R for Data Science*
* [Chapter 18: Pipes](http://r4ds.had.co.nz/pipes.html) in *R for Data Science*
* [Data wrangling cheat sheet](https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf)
4\.3 Setup
----------
```
# libraries needed
library(tidyverse)
library(dataskills)
set.seed(8675309) # makes sure random numbers are reproducible
```
4\.4 Tidy Data
--------------
### 4\.4\.1 Three Rules
* Each [variable](https://psyteachr.github.io/glossary/v#variable "A word that identifies and stores the value of some data for later use.") must have its own column
* Each [observation](https://psyteachr.github.io/glossary/o#observation "All of the data about a single trial or question.") must have its own row
* Each [value](https://psyteachr.github.io/glossary/v#value "A single number or piece of data.") must have its own cell
This table has three observations per row and the `total_meanRT` column contains two values.
Table 4\.1: Untidy table
| id | score\_1 | score\_2 | score\_3 | rt\_1 | rt\_2 | rt\_3 | total\_meanRT |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 4 | 3 | 7 | 857 | 890 | 859 | 14 (869\) |
| 2 | 3 | 1 | 1 | 902 | 900 | 959 | 5 (920\) |
| 3 | 2 | 5 | 4 | 757 | 823 | 901 | 11 (827\) |
| 4 | 6 | 2 | 6 | 844 | 788 | 624 | 14 (752\) |
| 5 | 1 | 7 | 2 | 659 | 764 | 690 | 10 (704\) |
This is the tidy version.
Table 4\.1: Tidy table
| id | trial | rt | score | total | mean\_rt |
| --- | --- | --- | --- | --- | --- |
| 1 | 1 | 857 | 4 | 14 | 869 |
| 1 | 2 | 890 | 3 | 14 | 869 |
| 1 | 3 | 859 | 7 | 14 | 869 |
| 2 | 1 | 902 | 3 | 5 | 920 |
| 2 | 2 | 900 | 1 | 5 | 920 |
| 2 | 3 | 959 | 1 | 5 | 920 |
| 3 | 1 | 757 | 2 | 11 | 827 |
| 3 | 2 | 823 | 5 | 11 | 827 |
| 3 | 3 | 901 | 4 | 11 | 827 |
| 4 | 1 | 844 | 6 | 14 | 752 |
| 4 | 2 | 788 | 2 | 14 | 752 |
| 4 | 3 | 624 | 6 | 14 | 752 |
| 5 | 1 | 659 | 1 | 10 | 704 |
| 5 | 2 | 764 | 7 | 10 | 704 |
| 5 | 3 | 690 | 2 | 10 | 704 |
### 4\.4\.2 Wide versus long
Data tables can be in [wide](https://psyteachr.github.io/glossary/w#wide "Data where all of the observations about one subject are in the same row") format or [long](https://psyteachr.github.io/glossary/l#long "Data where each observation is on a separate row") format (and sometimes a mix of the two). Wide data are where all of the observations about one subject are in the same row, while long data are where each observation is on a separate row. You often need to convert between these formats to do different types of analyses or data processing.
Imagine a study where each subject completes a questionnaire with three items. Each answer is an [observation](https://psyteachr.github.io/glossary/o#observation "All of the data about a single trial or question.") of that subject. You are probably most familiar with data like this in a wide format, where the subject `id` is in one column, and each of the three item responses is in its own column.
Table 4\.2: Wide data
| id | Q1 | Q2 | Q3 |
| --- | --- | --- | --- |
| A | 1 | 2 | 3 |
| B | 4 | 5 | 6 |
The same data can be represented in a long format by creating a new column that specifies what `item` the observation is from and a new column that specifies the `value` of that observation.
Table 4\.3: Long data
| id | item | value |
| --- | --- | --- |
| A | Q1 | 1 |
| B | Q1 | 4 |
| A | Q2 | 2 |
| B | Q2 | 5 |
| A | Q3 | 3 |
| B | Q3 | 6 |
Create a long version of the following table.
| id | fav\_colour | fav\_animal |
| --- | --- | --- |
| Lisa | red | echidna |
| Robbie | orange | babirusa |
| Steven | green | frog |
Answer
Your answer doesn’t need to have the same column headers or be in the same order.
| id | fav | answer |
| --- | --- | --- |
| Lisa | colour | red |
| Lisa | animal | echidna |
| Robbie | colour | orange |
| Robbie | animal | babirusa |
| Steven | colour | green |
| Steven | animal | frog |
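If you want to check your answer with code once you have read the next section, here is one possible sketch using `pivot_longer()` (the column names `fav` and `answer` are just one choice):
```
library(tidyverse) # loaded in the chapter setup

pets <- tibble(
  id = c("Lisa", "Robbie", "Steven"),
  fav_colour = c("red", "orange", "green"),
  fav_animal = c("echidna", "babirusa", "frog")
)

pivot_longer(pets,
             cols = c(fav_colour, fav_animal),
             names_to = "fav",
             names_prefix = "fav_", # strip the "fav_" prefix from the headers
             values_to = "answer")
```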
4\.5 Pivot Functions
--------------------
The pivot functions allow you to transform a data table from wide to long or long to wide in one step.
### 4\.5\.1 Load Data
We will use the dataset `personality` from the dataskills package (or download the data from [personality.csv](https://psyteachr.github.io/msc-data-skills/data/personality.csv)). These data are from a 5\-factor personality questionnaire. Each question is labelled with the domain (Op \= openness, Co \= conscientiousness, Ex \= extroversion, Ag \= agreeableness, and Ne \= neuroticism) and the question number.
```
data("personality", package = "dataskills")
```
| user\_id | date | Op1 | Ne1 | Ne2 | Op2 | Ex1 | Ex2 | Co1 | Co2 | Ne3 | Ag1 | Ag2 | Ne4 | Ex3 | Co3 | Op3 | Ex4 | Op4 | Ex5 | Ag3 | Co4 | Co5 | Ne5 | Op5 | Ag4 | Op6 | Co6 | Ex6 | Ne6 | Co7 | Ag5 | Co8 | Ex7 | Ne7 | Co9 | Op7 | Ne8 | Ag6 | Ag7 | Co10 | Ex8 | Ex9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 2006\-03\-23 | 3 | 4 | 0 | 6 | 3 | 3 | 3 | 3 | 0 | 2 | 1 | 3 | 3 | 2 | 2 | 1 | 3 | 3 | 1 | 3 | 0 | 3 | 6 | 1 | 0 | 6 | 3 | 1 | 3 | 3 | 3 | 3 | NA | 3 | 0 | 2 | NA | 3 | 1 | 2 | 4 |
| 1 | 2006\-02\-08 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 | 6 | 6 | 6 | 0 | 6 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 |
| 2 | 2005\-10\-24 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 5 | 1 | 5 | 1 | 1 | 1 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 5 | 5 | 5 | 1 | 5 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 |
| 5 | 2005\-12\-07 | 6 | 4 | 4 | 4 | 2 | 3 | 3 | 3 | 1 | 4 | 0 | 2 | 5 | 3 | 5 | 3 | 6 | 6 | 1 | 5 | 5 | 4 | 2 | 4 | 1 | 4 | 3 | 1 | 1 | 0 | 1 | 4 | 2 | 4 | 5 | 1 | 2 | 1 | 5 | 4 | 5 |
| 8 | 2006\-07\-27 | 6 | 1 | 2 | 6 | 2 | 3 | 5 | 4 | 0 | 6 | 5 | 3 | 3 | 4 | 5 | 3 | 6 | 3 | 0 | 5 | 5 | 1 | 5 | 6 | 6 | 6 | 0 | 0 | 3 | 2 | 3 | 1 | 0 | 3 | 5 | 1 | 3 | 1 | 3 | 3 | 5 |
| 108 | 2006\-02\-28 | 3 | 2 | 1 | 4 | 4 | 4 | 4 | 3 | 1 | 5 | 4 | 2 | 3 | 4 | 4 | 3 | 3 | 3 | 4 | 3 | 3 | 1 | 4 | 5 | 4 | 5 | 4 | 1 | 4 | 5 | 4 | 2 | 2 | 4 | 4 | 1 | 4 | 3 | 5 | 4 | 2 |
### 4\.5\.2 pivot\_longer()
`pivot_longer()` converts a wide data table to long format by converting the headers from specified columns into the values of new columns, and combining the values of those columns into a new condensed column.
* `cols` refers to the columns you want to make long. You can refer to them by their names, like `col1, col2, col3, col4` or `col1:col4`, or by their numbers, like `8, 9, 10` or `8:10`.
* `names_to` is what you want to call the new columns that the gathered column headers will go into; it’s “domain” and “qnumber” in this example.
* `names_sep` is an optional argument if you have more than one value for `names_to`. It specifies the characters or position to split the values of the `cols` headers.
* `values_to` is what you want to call the values in the columns `...`; they’re “score” in this example.
```
personality_long <- pivot_longer(
data = personality,
cols = Op1:Ex9, # columns to make long
names_to = c("domain", "qnumber"), # new column names for headers
names_sep = 2, # how to split the headers
values_to = "score" # new column name for values
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
## $ date <date> 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2…
## $ domain <chr> "Op", "Ne", "Ne", "Op", "Ex", "Ex", "Co", "Co", "Ne", "Ag", "A…
## $ qnumber <chr> "1", "1", "2", "2", "1", "2", "1", "2", "3", "1", "2", "4", "3…
## $ score <dbl> 3, 4, 0, 6, 3, 3, 3, 3, 0, 2, 1, 3, 3, 2, 2, 1, 3, 3, 1, 3, 0,…
```
You can pipe a data table to `glimpse()` at the end to have a quick look at it. It will still save to the object.
What would you set `names_sep` to in order to split the `cols` headers listed below into the results? One working answer is shown in the `names_sep` column; a sketch of the third case follows the table.
| `cols` | `names_to` | `names_sep` |
| --- | --- | --- |
| `A_1`, `A_2`, `B_1`, `B_2` | `c("condition", "version")` | `"_"` |
| `A1`, `A2`, `B1`, `B2` | `c("condition", "version")` | `1` |
| `cat-day&pre`, `cat-day&post`, `cat-night&pre`, `cat-night&post`, `dog-day&pre`, `dog-day&post`, `dog-night&pre`, `dog-night&post` | `c("pet", "time", "condition")` | `"[-&]"` |
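For the third case, here is a minimal sketch with made-up data showing how a regular-expression `names_sep` splits each header into three columns:
```
library(tidyverse) # loaded in the chapter setup

toy <- tibble(
  id               = 1:2,
  `cat-day&pre`    = c(1, 2),
  `cat-day&post`   = c(3, 4),
  `dog-night&pre`  = c(5, 6),
  `dog-night&post` = c(7, 8)
)

pivot_longer(toy,
             cols = -id,
             names_to = c("pet", "time", "condition"),
             names_sep = "[-&]", # split at either a dash or an ampersand
             values_to = "score")
```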
### 4\.5\.3 pivot\_wider()
We can also go from long to wide format using the `pivot_wider()` function.
* `names_from` is the columns that contain your new column headers.
* `values_from` is the column that contains the values for the new columns.
* `names_sep` is the character string used to join names if `names_from` is more than one column.
```
personality_wide <- pivot_wider(
data = personality_long,
names_from = c(domain, qnumber),
values_from = score,
names_sep = ""
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Op1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Ne1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Op2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Ex1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Co1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Ne3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ag1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ne4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ex3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Co3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Op3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Ex4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Op4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Ex5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ag3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Co4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Ne5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Op5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Ag4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Op6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Co6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Ex6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ne6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Co7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Ag5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Co8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Ex7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ne7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Co9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Op7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
## $ Ne8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Ag6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Ex8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
```
4\.6 Tidy Verbs
---------------
The pivot functions above are relatively new and combine the four basic tidyr verbs. You can also convert data between long and wide formats using those verbs directly. Many researchers still use them, and older code will not use the pivot functions, so it is useful to know how to interpret the verbs.
### 4\.6\.1 gather()
Much like `pivot_longer()`, `gather()` makes a wide data table long by creating a column for the headers and a column for the values. The main difference is that you cannot turn the headers into more than one column.
* `key` is what you want to call the new column that the gathered column headers will go into; it’s “question” in this example. It is like `names_to` in `pivot_longer()`, but can only take one value (multiple values need to be separated after `separate()`).
* `value` is what you want to call the values in the gathered columns; they’re “score” in this example. It is like `values_to` in `pivot_longer()`.
* `...` refers to the columns you want to gather. It is like `cols` in `pivot_longer()`.
The `gather()` function converts `personality` from a wide data table to long format, with a row for each user/question observation. The resulting data table should have the columns: `user_id`, `date`, `question`, and `score`.
```
personality_gathered <- gather(
data = personality,
key = "question", # new column name for gathered headers
value = "score", # new column name for gathered values
Op1:Ex9 # columns to gather
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ question <chr> "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.2 separate()
* `col` is the column you want to separate
* `into` is a vector of new column names
* `sep` is the character(s) that separate your new columns. This defaults to anything that isn’t alphanumeric, like `.,_-/:`, and is like the `names_sep` argument in `pivot_longer()`.
Split the `question` column into two columns: `domain` and `qnumber`.
There is no character to split on here, but you can separate a column after a specific number of characters by setting `sep` to an integer. For example, to split “abcde” after the third character, use `sep = 3`, which results in `c("abc", "de")`. You can also use a negative number to split before the *n*th character from the right. For example, to split a column that has words of various lengths and 2\-digit suffixes (like “lisa03” and “amanda38”), you can use `sep = -2`.
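To see how positional splitting works before applying it to the real data, here is a minimal sketch using made-up values (the tibble contents and column names are just for illustration):
```
# split after the third character from the left
tibble(x = c("abcde", "fghij")) %>%
  separate(x, into = c("first", "rest"), sep = 3)
# first = "abc", "fgh"; rest = "de", "ij"

# split before the second character from the right
tibble(id = c("lisa03", "amanda38")) %>%
  separate(id, into = c("name", "number"), sep = -2)
# name = "lisa", "amanda"; number = "03", "38"
```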
```
personality_sep <- separate(
data = personality_gathered,
col = question, # column to separate
into = c("domain", "qnumber"), # new column names
sep = 2 # where to separate
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ domain <chr> "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "O…
## $ qnumber <chr> "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
```
If you want to separate just at full stops, you need to use `sep = "\\."`, not `sep = "."`. The two backslashes **escape** the full stop, making it interpreted as a literal full stop and not the regular expression for any character.
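For example, here is a small sketch with made-up file names, splitting at the literal full stop:
```
# escape the full stop so it is treated literally
tibble(file = c("results.csv", "notes.txt")) %>%
  separate(file, into = c("name", "ext"), sep = "\\.")
# name = "results", "notes"; ext = "csv", "txt"

# sep = "." would treat *every* character as a separator and mangle the split
```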
### 4\.6\.3 unite()
* `col` is your new united column
* `...` refers to the columns you want to unite
* `sep` is the character(s) that will separate your united columns
Put the domain and qnumber columns back together into a new column named `domain_n`. Make it in a format like “Op\_Q1\.”
```
personality_unite <- unite(
data = personality_sep,
col = "domain_n", # new column name
domain, qnumber, # columns to unite
sep = "_Q" # separation characters
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ domain_n <chr> "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.4 spread()
You can reverse the processes above, as well. For example, you can convert data from long format into wide format.
* `key` is the column that contains your new column headers. It is like `names_from` in `pivot_wider()`, but can only take one value (multiple values need to be merged first using `unite()`).
* `value` is the column that contains the values in the new spread columns. It is like `values_from` in `pivot_wider()`.
```
personality_spread <- spread(
data = personality_unite,
key = domain_n, # column that contains new headers
value = score # column that contains new values
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Ag_Q1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag_Q2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ag_Q3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Ag_Q4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Ag_Q5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Ag_Q6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag_Q7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co_Q1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co_Q10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Co_Q2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Co_Q3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Co_Q4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co_Q5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Co_Q6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Co_Q7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Co_Q8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Co_Q9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Ex_Q1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex_Q2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Ex_Q3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Ex_Q4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Ex_Q5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ex_Q6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ex_Q7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ex_Q8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex_Q9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
## $ Ne_Q1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne_Q2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Ne_Q3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ne_Q4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ne_Q5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Ne_Q6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Ne_Q7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Ne_Q8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Op_Q1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Op_Q2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Op_Q3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Op_Q4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Op_Q5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Op_Q6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Op_Q7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
```
4\.7 Pipes
----------
Pipes are a way to order your code in a more readable format.
Let’s say you have a small data table with 10 participant IDs, two columns with variable type A, and 2 columns with variable type B. You want to calculate the mean of the A variables and the mean of the B variables and return a table with 10 rows (1 for each participant) and 3 columns (`id`, `A_mean` and `B_mean`).
One way you could do this is by creating a new object at every step and using that object in the next step. This is pretty clear, but you’ve created 6 unnecessary data objects in your environment. This can get confusing in very long scripts.
```
# make a data table with 10 subjects
data_original <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10, 3)
)
# gather columns A1 to B2 into "variable" and "value" columns
data_gathered <- gather(data_original, variable, value, A1:B2)
# separate the variable column at the _ into "var" and "var_n" columns
data_separated <- separate(data_gathered, variable, c("var", "var_n"), sep = 1)
# group the data by id and var
data_grouped <- group_by(data_separated, id, var)
# calculate the mean value for each id/var
data_summarised <- summarise(data_grouped, mean = mean(value), .groups = "drop")
# spread the mean column into A and B columns
data_spread <- spread(data_summarised, var, mean)
# rename A and B to A_mean and B_mean
data <- rename(data_spread, A_mean = A, B_mean = B)
data
```
| id | A\_mean | B\_mean |
| --- | --- | --- |
| 1 | \-0\.5938256 | 1\.0243046 |
| 2 | 0\.7440623 | 2\.7172046 |
| 3 | 0\.9309275 | 3\.9262358 |
| 4 | 0\.7197686 | 1\.9662632 |
| 5 | \-0\.0280832 | 1\.9473456 |
| 6 | \-0\.0982555 | 3\.2073687 |
| 7 | 0\.1256922 | 0\.9256321 |
| 8 | 1\.4526447 | 2\.3778116 |
| 9 | 0\.2976443 | 1\.6617481 |
| 10 | 0\.5589199 | 2\.1034679 |
You *can* name each object `data` and keep replacing the old data object with the new one at each step. This will keep your environment clean, but I don’t recommend it because it makes it too easy to accidentally run your code out of order when you are running line\-by\-line for development or debugging.
One way to avoid extra objects is to nest your functions, literally replacing each data object with the code that generated it in the previous step. This can be fine for very short chains.
```
mean_petal_width <- round(mean(iris$Petal.Width), 2)
```
But it gets extremely confusing for long chains:
```
# do not ever do this!!
data <- rename(
spread(
summarise(
group_by(
separate(
gather(
tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)),
variable, value, A1:B2),
variable, c("var", "var_n"), sep = 1),
id, var),
mean = mean(value), .groups = "drop"),
var, mean),
A_mean = A, B_mean = B)
```
The pipe lets you “pipe” the result of each function into the next function, allowing you to put your code in a logical order without creating too many extra objects.
```
# calculate mean of A and B variables for each participant
data <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)
) %>%
gather(variable, value, A1:B2) %>%
separate(variable, c("var", "var_n"), sep=1) %>%
group_by(id, var) %>%
summarise(mean = mean(value), .groups = "drop") %>%
spread(var, mean) %>%
rename(A_mean = A, B_mean = B)
```
You can read this code from top to bottom as follows:
1. Make a tibble called `data` with
* id of 1 to 10,
* A1 of 10 random numbers from a normal distribution,
* A2 of 10 random numbers from a normal distribution,
* B1 of 10 random numbers from a normal distribution,
* B2 of 10 random numbers from a normal distribution; and then
2. Gather to create `variable` and `value` columns from columns `A1` to `B2`; and then
3. Separate the column `variable` into 2 new columns called `var` and `var_n`, separating at character 1; and then
4. Group by columns `id` and `var`; and then
5. Summarise a new column called `mean` as the mean of the `value` column for each group and drop the grouping; and then
6. Spread to make new columns with the key names in `var` and values in `mean`; and then
7. Rename to make columns called `A_mean` (old `A`) and `B_mean` (old `B`)
You can make intermediate objects whenever you need to break up your code because it’s getting too complicated or you need to debug something.
You can debug a pipe by highlighting from the beginning to just before the pipe you want to stop at. Try this by highlighting from `data <-` to the end of the `separate` function and typing cmd\-return. What does `data` look like now?
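For example, this is a sketch of what you would be running if you highlighted only up to the end of the `separate()` step:
```
# run only the first part of the chain to inspect the intermediate result
data <- tibble(
  id = 1:10,
  A1 = rnorm(10, 0),
  A2 = rnorm(10, 1),
  B1 = rnorm(10, 2),
  B2 = rnorm(10, 3)
) %>%
  gather(variable, value, A1:B2) %>%
  separate(variable, c("var", "var_n"), sep = 1)

# data is now long: 40 rows with columns id, var, var_n, and value
```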
Chain all the steps above using pipes.
```
personality_reshaped <- personality %>%
gather("question", "score", Op1:Ex9) %>%
separate(question, c("domain", "qnumber"), sep = 2) %>%
unite("domain_n", domain, qnumber, sep = "_Q") %>%
spread(domain_n, score)
```
4\.8 More Complex Example
-------------------------
### 4\.8\.1 Load Data
Get data on infant and maternal mortality rates from the dataskills package. If you don’t have the package, you can download them here:
* [infant mortality](https://psyteachr.github.io/msc-data-skills/data/infmort.csv)
* [maternal mortality](https://psyteachr.github.io/msc-data-skills/data/matmort.xls)
```
data("infmort", package = "dataskills")
head(infmort)
```
| Country | Year | Infant mortality rate (probability of dying between birth and age 1 per 1000 live births) |
| --- | --- | --- |
| Afghanistan | 2015 | 66\.3 \[52\.7\-83\.9] |
| Afghanistan | 2014 | 68\.1 \[55\.7\-83\.6] |
| Afghanistan | 2013 | 69\.9 \[58\.7\-83\.5] |
| Afghanistan | 2012 | 71\.7 \[61\.6\-83\.7] |
| Afghanistan | 2011 | 73\.4 \[64\.4\-84\.2] |
| Afghanistan | 2010 | 75\.1 \[66\.9\-85\.1] |
```
data("matmort", package = "dataskills")
head(matmort)
```
| Country | 1990 | 2000 | 2015 |
| --- | --- | --- | --- |
| Afghanistan | 1 340 \[ 878 \- 1 950] | 1 100 \[ 745 \- 1 570] | 396 \[ 253 \- 620] |
| Albania | 71 \[ 58 \- 88] | 43 \[ 33 \- 56] | 29 \[ 16 \- 46] |
| Algeria | 216 \[ 141 \- 327] | 170 \[ 118 \- 241] | 140 \[ 82 \- 244] |
| Angola | 1 160 \[ 627 \- 2 020] | 924 \[ 472 \- 1 730] | 477 \[ 221 \- 988] |
| Argentina | 72 \[ 64 \- 80] | 60 \[ 54 \- 65] | 52 \[ 44 \- 63] |
| Armenia | 58 \[ 51 \- 65] | 40 \[ 35 \- 46] | 25 \[ 21 \- 31] |
### 4\.8\.2 Wide to Long
`matmort` is in wide format, with a separate column for each year. Change it to long format, with a row for each Country/Year observation.
This example is complicated because the column names to gather *are* numbers. If the column names are non\-standard (e.g., have spaces, start with numbers, or have special characters), you can enclose them in backticks (\`) like the example below.
```
matmort_long <- matmort %>%
pivot_longer(cols = `1990`:`2015`,
names_to = "Year",
values_to = "stats") %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Albania", "Alban…
## $ Year <chr> "1990", "2000", "2015", "1990", "2000", "2015", "1990", "2000"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "1 100 [ 745 - 1 570]", "396 [ 253 - …
```
You can put `matmort` as the first argument to `pivot_longer()`; you don’t have to pipe it in. But when I’m working on data processing, I often find myself needing to insert or rearrange steps, and I constantly introduce errors by forgetting to take the first argument out of a pipe chain, so now I start with the original data table and pipe from there.
Alternatively, you can use the `gather()` function.
```
matmort_long <- matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "71 [ 58 - 88]", "216 [ 141 - 327]",…
```
### 4\.8\.3 One Piece of Data per Column
The data in the `stats` column is in an unusual format with some sort of confidence interval in brackets and lots of extra spaces. We don’t need any of the spaces, so first we’ll remove them with `mutate()`, which we’ll learn more about in the next lesson.
The `separate` function will separate your data on anything that is not a number or letter, so try it first without specifying the `sep` argument. The `into` argument is a list of the new column names.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi")) %>%
glimpse()
```
```
## Warning: Expected 3 pieces. Additional pieces discarded in 543 rows [1, 2, 3, 4,
## 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...].
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
The `gsub(pattern, replacement, x)` function is a flexible way to do search and replace. The example above replaces all occurrences of the `pattern` `" "` (a space), with the `replacement` `""` (nothing), in the string `x` (the `stats` column). Use `sub()` instead if you only want to replace the first occurrence of a pattern. We only used a simple pattern here, but you can use more complicated [regex](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) patterns to replace, for example, all even digits (e.g., `gsub("[02468]", "", "id = 123456")`) or all occurrences of the word colour in US or UK spelling (e.g., `gsub("colo(u)?r", "**", "replace color, colour, or colours, but not collors")`).
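Here is a small sketch of the difference between `sub()` and `gsub()`, and of the colour/color pattern, using a value from `matmort` and a made-up sentence:
```
x <- "1 340 [ 878 - 1 950]"
sub(" ", "", x)   # replaces only the first space: "1340 [ 878 - 1 950]"
gsub(" ", "", x)  # replaces every space: "1340[878-1950]"

# the optional "u" matches both spellings, but not the misspelling
gsub("colo(u)?r", "**", "color, colour, colours, collors")
# "**, **, **s, collors"
```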
#### 4\.8\.3\.1 Handle spare columns with `extra`
The previous example should have given you a warning about “Additional pieces discarded in 543 rows.” This is because `separate` splits the column at the brackets and dashes, so the text `100[90-110]` would split into four values `c("100", "90", "110", "")`, but we only specified 3 new columns. The fourth value is always empty (just the part after the last bracket), so we are happy to drop it, but `separate` generates a warning so you don’t do that accidentally. You can turn off the warning by adding the `extra` argument and setting it to `"drop"`. Look at the help for `?tidyr::separate` to see what the other options do.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
#### 4\.8\.3\.2 Set delimiters with `sep`
Now do the same with `infmort`. It’s already in long format, so you don’t need to use `gather`, but the third column has a ridiculously long name, so we can just refer to it by its column number (3\).
```
infmort_split <- infmort %>%
separate(3, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66", "68", "69", "71", "73", "75", "76", "78", "80", "82", "8…
## $ ci_low <chr> "3", "1", "9", "7", "4", "1", "8", "6", "4", "3", "4", "7", "0…
## $ ci_hi <chr> "52", "55", "58", "61", "64", "66", "69", "71", "73", "75", "7…
```
**Wait, that didn’t work at all!** It split the column on spaces, brackets, *and* full stops. We just want to split on the spaces, brackets and dashes. So we need to manually set `sep` to what the delimiters are. Also, once there are more than a few arguments specified for a function, it’s easier to read them if you put one argument on each line.
You can use [regular expressions](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) to separate complex columns. Here, we want to separate on dashes and brackets. You can separate on a list of delimiters by putting them in parentheses, separated by “\|”. It’s a little more complicated because brackets have a special meaning in regex, so you need to “escape” the left one with two backslashes, as in “\\\[”.
```
infmort_split <- infmort %>%
separate(
col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])"
) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66.3 ", "68.1 ", "69.9 ", "71.7 ", "73.4 ", "75.1 ", "76.8 ",…
## $ ci_low <chr> "52.7", "55.7", "58.7", "61.6", "64.4", "66.9", "69.0", "71.2"…
## $ ci_hi <chr> "83.9", "83.6", "83.5", "83.7", "84.2", "85.1", "86.1", "87.3"…
```
#### 4\.8\.3\.3 Fix data types with `convert`
That’s better. Notice the `<chr>` next to `rate`, `ci_low` and `ci_hi`. That means these columns hold characters (like words), not numbers or integers. This can cause problems when you try to do things like average the numbers (you can’t average words), so we can fix it by adding the argument `convert` and setting it to `TRUE`.
```
infmort_split <- infmort %>%
separate(col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <dbl> 66.3, 68.1, 69.9, 71.7, 73.4, 75.1, 76.8, 78.6, 80.4, 82.3, 84…
## $ ci_low <dbl> 52.7, 55.7, 58.7, 61.6, 64.4, 66.9, 69.0, 71.2, 73.4, 75.5, 77…
## $ ci_hi <dbl> 83.9, 83.6, 83.5, 83.7, 84.2, 85.1, 86.1, 87.3, 88.9, 90.7, 92…
```
Do the same for `matmort`.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.4 All in one step
We can chain all the steps for `matmort` above together, since we don’t need those intermediate data tables.
```
matmort2 <- dataskills::matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(
col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE
) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.5 Columns by Year
Spread out the maternal mortality rate by year.
```
matmort_wide <- matmort2 %>%
spread(key = Year, value = rate) %>%
print()
```
```
## # A tibble: 542 x 6
## Country ci_low ci_hi `1990` `2000` `2015`
## <chr> <int> <int> <int> <int> <int>
## 1 Afghanistan 253 620 NA NA 396
## 2 Afghanistan 745 1570 NA 1100 NA
## 3 Afghanistan 878 1950 1340 NA NA
## 4 Albania 16 46 NA NA 29
## 5 Albania 33 56 NA 43 NA
## 6 Albania 58 88 71 NA NA
## 7 Algeria 82 244 NA NA 140
## 8 Algeria 118 241 NA 170 NA
## 9 Algeria 141 327 216 NA NA
## 10 Angola 221 988 NA NA 477
## # … with 532 more rows
```
Nope, that didn’t work at all, but it’s a really common mistake when spreading data. This is because `spread` matches on all the remaining columns, so Afghanistan with `ci_low` of 253 is treated as a different observation than Afghanistan with `ci_low` of 745\.
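One sketch of a workaround that stays with `spread()` (reusing `matmort2` from above; the object name `matmort_rate_wide` is just illustrative) is to keep only the id, key, and value columns before spreading:
```
matmort_rate_wide <- matmort2 %>%
  select(Country, Year, rate) %>%  # drop ci_low and ci_hi so the rows match up
  spread(key = Year, value = rate)
# one row per Country, with columns `1990`, `2000` and `2015`
```
That works, but you lose the confidence interval columns.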
This is where `pivot_wider()` can be very useful. You can set `values_from` to multiple column names and their names will be added to the `names_from` values.
```
matmort_wide <- matmort2 %>%
pivot_wider(
names_from = Year,
values_from = c(rate, ci_low, ci_hi)
)
glimpse(matmort_wide)
```
```
## Rows: 181
## Columns: 10
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina"…
## $ rate_1990 <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33…
## $ rate_2000 <int> 1100, 43, 170, 924, 60, 40, 9, 5, 48, 61, 21, 399, 48, 26,…
## $ rate_2015 <int> 396, 29, 140, 477, 52, 25, 6, 4, 25, 80, 15, 176, 27, 4, 7…
## $ ci_low_1990 <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, …
## $ ci_low_2000 <int> 745, 33, 118, 472, 54, 35, 8, 4, 42, 50, 18, 322, 38, 22, …
## $ ci_low_2015 <int> 253, 16, 82, 221, 44, 21, 5, 3, 17, 53, 12, 125, 19, 3, 5,…
## $ ci_hi_1990 <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 3…
## $ ci_hi_2000 <int> 1570, 56, 241, 1730, 65, 46, 10, 6, 55, 74, 26, 496, 58, 3…
## $ ci_hi_2015 <int> 620, 46, 244, 988, 63, 31, 7, 5, 35, 124, 19, 280, 37, 6, …
```
### 4\.8\.6 Experimentum Data
Students in the Institute of Neuroscience and Psychology at the University of Glasgow can use the online experiment builder platform, [Experimentum](https://debruine.github.io/experimentum/). The platform is also [open source on github](https://github.com/debruine/experimentum) for anyone who can install it on a web server. It allows you to group questionnaires and experiments into **projects** with randomisation and counterbalancing. Data for questionnaires and experiments are downloadable in long format, but researchers often need to put them in wide format for analysis.
Look at the help menu for built\-in dataset `dataskills::experimentum_quests` to learn what each column is. Subjects are asked questions about dogs to test the different questionnaire response types.
* current: Do you own a dog? (yes/no)
* past: Have you ever owned a dog? (yes/no)
* name: What is the best name for a dog? (free short text)
* good: How good are dogs? (1\=pretty good:7\=very good)
* country: What country do borzois come from?
* good\_borzoi: How good are borzois? (0\=pretty good:100\=very good)
* text: Write some text about dogs. (free long text)
* time: What time is it? (time)
To get the dataset into wide format, where each question is in a separate column, use the following code:
```
q <- dataskills::experimentum_quests %>%
pivot_wider(id_cols = session_id:user_age,
names_from = q_name,
values_from = dv) %>%
type.convert(as.is = TRUE) %>%
print()
```
```
## # A tibble: 24 x 15
## session_id project_id quest_id user_id user_sex user_status user_age current
## <int> <int> <int> <int> <chr> <chr> <dbl> <int>
## 1 34034 1 1 31105 female guest 28.2 1
## 2 34104 1 1 31164 male registered 19.4 1
## 3 34326 1 1 31392 female guest 17 0
## 4 34343 1 1 31397 male guest 22 1
## 5 34765 1 1 31770 female guest 44 1
## 6 34796 1 1 31796 female guest 35.9 0
## 7 34806 1 1 31798 female guest 35 0
## 8 34822 1 1 31802 female guest 58 1
## 9 34864 1 1 31820 male guest 20 0
## 10 35014 1 1 31921 female student 39.2 1
## # … with 14 more rows, and 7 more variables: past <int>, name <chr>,
## # good <int>, country <chr>, text <chr>, good_borzoi <int>, time <chr>
```
The responses in the `dv` column have multiple types (e.g., integer, double, and character), but they are all represented as character strings when they’re in the same column. After you spread the data to wide format, each column should be given the correct data type. The function `type.convert()` makes a best guess at what type each new column should be and converts it; the argument `as.is = TRUE` keeps text columns as characters rather than converting them to factors.
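As a small sketch of what `type.convert()` does, here it is applied to a made-up all-character tibble:
```
chr_table <- tibble(
  n    = c("1", "2", "3"),       # whole numbers -> integer
  x    = c("1.5", "2.5", "3.5"), # decimals -> double
  word = c("a", "b", "c")        # stays character with as.is = TRUE
)
type.convert(chr_table, as.is = TRUE) %>% glimpse()
# n becomes <int>, x becomes <dbl>, word stays <chr>
```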
4\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [long](https://psyteachr.github.io/glossary/l#long) | Data where each observation is on a separate row |
| [observation](https://psyteachr.github.io/glossary/o#observation) | All of the data about a single trial or question. |
| [value](https://psyteachr.github.io/glossary/v#value) | A single number or piece of data. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [wide](https://psyteachr.github.io/glossary/w#wide) | Data where all of the observations about one subject are in the same row |
4\.10 Exercises
---------------
Download the [exercises](exercises/04_tidyr_exercise.Rmd). See the [answers](exercises/04_tidyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(4)
# run this to access the answers
dataskills::exercise(4, answers = TRUE)
```
### 4\.5\.3 pivot\_wider()
We can also go from long to wide format using the `pivot_wider()` function.
* `names_from` is the columns that contain your new column headers.
* `values_from` is the column that contains the values for the new columns.
* `names_sep` is the character string used to join names if `names_from` is more than one column.
```
personality_wide <- pivot_wider(
data = personality_long,
names_from = c(domain, qnumber),
values_from = score,
names_sep = ""
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Op1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Ne1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Op2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Ex1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Co1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Ne3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ag1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ne4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ex3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Co3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Op3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Ex4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Op4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Ex5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ag3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Co4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Ne5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Op5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Ag4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Op6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Co6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Ex6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ne6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Co7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Ag5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Co8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Ex7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ne7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Co9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Op7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
## $ Ne8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Ag6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Ex8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
```
### 4\.5\.1 Load Data
We will used the dataset `personality` from the dataskills package (or download the data from [personality.csv](https://psyteachr.github.io/msc-data-skills/data/personality.csv)). These data are from a 5\-factor (personality) personality questionnaire. Each question is labelled with the domain (Op \= openness, Co \= conscientiousness, Ex \= extroversion, Ag \= agreeableness, and Ne \= neuroticism) and the question number.
```
data("personality", package = "dataskills")
```
| user\_id | date | Op1 | Ne1 | Ne2 | Op2 | Ex1 | Ex2 | Co1 | Co2 | Ne3 | Ag1 | Ag2 | Ne4 | Ex3 | Co3 | Op3 | Ex4 | Op4 | Ex5 | Ag3 | Co4 | Co5 | Ne5 | Op5 | Ag4 | Op6 | Co6 | Ex6 | Ne6 | Co7 | Ag5 | Co8 | Ex7 | Ne7 | Co9 | Op7 | Ne8 | Ag6 | Ag7 | Co10 | Ex8 | Ex9 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 | 2006\-03\-23 | 3 | 4 | 0 | 6 | 3 | 3 | 3 | 3 | 0 | 2 | 1 | 3 | 3 | 2 | 2 | 1 | 3 | 3 | 1 | 3 | 0 | 3 | 6 | 1 | 0 | 6 | 3 | 1 | 3 | 3 | 3 | 3 | NA | 3 | 0 | 2 | NA | 3 | 1 | 2 | 4 |
| 1 | 2006\-02\-08 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 | 6 | 6 | 6 | 0 | 6 | 0 | 6 | 6 | 0 | 6 | 0 | 6 | 0 | 6 |
| 2 | 2005\-10\-24 | 6 | 0 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 6 | 6 | 5 | 1 | 5 | 1 | 1 | 1 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 | 5 | 5 | 5 | 1 | 5 | 1 | 5 | 5 | 1 | 5 | 1 | 5 | 1 | 5 |
| 5 | 2005\-12\-07 | 6 | 4 | 4 | 4 | 2 | 3 | 3 | 3 | 1 | 4 | 0 | 2 | 5 | 3 | 5 | 3 | 6 | 6 | 1 | 5 | 5 | 4 | 2 | 4 | 1 | 4 | 3 | 1 | 1 | 0 | 1 | 4 | 2 | 4 | 5 | 1 | 2 | 1 | 5 | 4 | 5 |
| 8 | 2006\-07\-27 | 6 | 1 | 2 | 6 | 2 | 3 | 5 | 4 | 0 | 6 | 5 | 3 | 3 | 4 | 5 | 3 | 6 | 3 | 0 | 5 | 5 | 1 | 5 | 6 | 6 | 6 | 0 | 0 | 3 | 2 | 3 | 1 | 0 | 3 | 5 | 1 | 3 | 1 | 3 | 3 | 5 |
| 108 | 2006\-02\-28 | 3 | 2 | 1 | 4 | 4 | 4 | 4 | 3 | 1 | 5 | 4 | 2 | 3 | 4 | 4 | 3 | 3 | 3 | 4 | 3 | 3 | 1 | 4 | 5 | 4 | 5 | 4 | 1 | 4 | 5 | 4 | 2 | 2 | 4 | 4 | 1 | 4 | 3 | 5 | 4 | 2 |
### 4\.5\.2 pivot\_longer()
`pivot_longer()` converts a wide data table to long format by converting the headers from specified columns into the values of new columns, and combining the values of those columns into a new condensed column.
* `cols` refers to the columns you want to make long You can refer to them by their names, like `col1, col2, col3, col4` or `col1:col4` or by their numbers, like `8, 9, 10` or `8:10`.
* `names_to` is what you want to call the new columns that the gathered column headers will go into; it’s “domain” and “qnumber” in this example.
* `names_sep` is an optional argument if you have more than one value for `names_to`. It specifies the characters or position to split the values of the `cols` headers.
* `values_to` is what you want to call the values in the columns `...`; they’re “score” in this example.
```
personality_long <- pivot_longer(
data = personality,
cols = Op1:Ex9, # columns to make long
names_to = c("domain", "qnumber"), # new column names for headers
names_sep = 2, # how to split the headers
values_to = "score" # new column name for values
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0,…
## $ date <date> 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2006-03-23, 2…
## $ domain <chr> "Op", "Ne", "Ne", "Op", "Ex", "Ex", "Co", "Co", "Ne", "Ag", "A…
## $ qnumber <chr> "1", "1", "2", "2", "1", "2", "1", "2", "3", "1", "2", "4", "3…
## $ score <dbl> 3, 4, 0, 6, 3, 3, 3, 3, 0, 2, 1, 3, 3, 2, 2, 1, 3, 3, 1, 3, 0,…
```
You can pipe a data table to `glimpse()` at the end to have a quick look at it. It will still save to the object.
What would you set `names_sep` to in order to split the `cols` headers listed below into the results?
| `cols` | `names_to` | `names_sep` |
| --- | --- | --- |
| `A_1`, `A_2`, `B_1`, `B_2` | `c("condition", "version")` | A B 1 2 \_ |
| `A1`, `A2`, `B1`, `B2` | `c("condition", "version")` | A B 1 2 \_ |
| `cat-day&pre`, `cat-day&post`, `cat-night&pre`, `cat-night&post`, `dog-day&pre`, `dog-day&post`, `dog-night&pre`, `dog-night&post` | `c("pet", "time", "condition")` | \- \& \- |
### 4\.5\.3 pivot\_wider()
We can also go from long to wide format using the `pivot_wider()` function.
* `names_from` is the columns that contain your new column headers.
* `values_from` is the column that contains the values for the new columns.
* `names_sep` is the character string used to join names if `names_from` is more than one column.
```
personality_wide <- pivot_wider(
data = personality_long,
names_from = c(domain, qnumber),
values_from = score,
names_sep = ""
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Op1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Ne1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Op2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Ex1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Co1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Ne3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ag1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ne4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ex3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Co3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Op3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Ex4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Op4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Ex5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ag3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Co4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Ne5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Op5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Ag4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Op6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Co6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Ex6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ne6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Co7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Ag5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Co8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Ex7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ne7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Co9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Op7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
## $ Ne8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Ag6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Ex8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
```
4\.6 Tidy Verbs
---------------
The pivot functions above are relatively new functions that combine the four basic tidy verbs. You can also convert data between long and wide formats using these functions. Many researchers still use these functions and older code will not use the pivot functions, so it is useful to know how to interpret these.
### 4\.6\.1 gather()
Much like `pivot_longer()`, `gather()` makes a wide data table long by creating a column for the headers and a column for the values. The main difference is that you cannot turn the headers into more than one column.
* `key` is what you want to call the new column that the gathered column headers will go into; it’s “question” in this example. It is like `names_to` in `pivot_longer()`, but can only take one value (multiple values need to be separated after `separate()`).
* `value` is what you want to call the values in the gathered columns; they’re “score” in this example. It is like `values_to` in `pivot_longer()`.
* `...` refers to the columns you want to gather. It is like `cols` in `pivot_longer()`.
The `gather()` function converts `personality` from a wide data table to long format, with a row for each user/question observation. The resulting data table should have the columns: `user_id`, `date`, `question`, and `score`.
```
personality_gathered <- gather(
data = personality,
key = "question", # new column name for gathered headers
value = "score", # new column name for gathered values
Op1:Ex9 # columns to gather
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ question <chr> "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1", "Op1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.2 separate()
* `col` is the column you want to separate
* `into` is a vector of new column names
* `sep` is the character(s) that separate your new columns. This defaults to anything that isn’t alphanumeric (like `.`, `,`, `_`, `-`, `/`, or `:`) and is like the `names_sep` argument in `pivot_longer()`.
Split the `question` column into two columns: `domain` and `qnumber`.
There is no separator character to split on here, but you can separate a column after a specific number of characters by setting `sep` to an integer. For example, to split “abcde” after the third character, use `sep = 3`, which results in `c("abc", "de")`. You can also use a negative number to split before the *n*th character from the right. For example, to split a column that has words of various lengths and 2\-digit suffixes (like “lisa03”, “amanda38”), you can use `sep = -2`.
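As a minimal sketch of the negative `sep` case (the `id` column and its values below are made up for illustration; this assumes the tidyverse is loaded, as above):
```
# toy example: variable-length names with fixed-length suffixes
toy <- tibble(id = c("lisa03", "amanda38"))

# split before the 2nd character from the right
separate(toy, id, into = c("name", "suffix"), sep = -2)
# "lisa03" -> "lisa", "03"; "amanda38" -> "amanda", "38"
```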
```
personality_sep <- separate(
data = personality_gathered,
col = question, # column to separate
into = c("domain", "qnumber"), # new column names
sep = 2 # where to separate
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 5
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ domain <chr> "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "Op", "O…
## $ qnumber <chr> "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1", "1…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
```
If you want to separate just at full stops, you need to use `sep = "\\."`, not `sep = "."`. The two backslashes **escape** the full stop, so it is interpreted as a literal full stop rather than the regular expression for any character.
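For example, a small sketch with a made-up `version` column:
```
# split version strings at literal full stops
versions <- tibble(version = c("1.2.0", "3.4.1"))

separate(versions, version, into = c("major", "minor", "patch"), sep = "\\.")
# sep = "." (unescaped) would instead treat *every* character as a delimiter
```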
### 4\.6\.3 unite()
* `col` is your new united column
* `...` refers to the columns you want to unite
* `sep` is the character(s) that will separate your united columns
Put the domain and qnumber columns back together into a new column named `domain_n`. Make it in a format like “Op\_Q1\.”
```
personality_unite <- unite(
data = personality_sep,
col = "domain_n", # new column name
domain, qnumber, # columns to unite
sep = "_Q" # separation characters
) %>%
glimpse()
```
```
## Rows: 615,000
## Columns: 4
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 9…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, …
## $ domain_n <chr> "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1", "Op_Q1"…
## $ score <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4…
```
### 4\.6\.4 spread()
You can reverse the processes above, as well. For example, you can convert data from long format into wide format.
* `key` is the column that contains your new column headers. It is like `names_from` in `pivot_wider()`, but can only take one value (multiple values need to be merged first using `unite()`).
* `value` is the column that contains the values in the new spread columns. It is like `values_from` in `pivot_wider()`.
```
personality_spread <- spread(
data = personality_unite,
key = domain_n, # column that contains new headers
value = score # column that contains new values
) %>%
glimpse()
```
```
## Rows: 15,000
## Columns: 43
## $ user_id <dbl> 0, 1, 2, 5, 8, 108, 233, 298, 426, 436, 685, 807, 871, 881, 94…
## $ date <date> 2006-03-23, 2006-02-08, 2005-10-24, 2005-12-07, 2006-07-27, 2…
## $ Ag_Q1 <dbl> 2, 0, 0, 4, 6, 5, 5, 4, 2, 5, 4, 3, 2, 4, 5, 3, 5, 5, 5, 4, 4,…
## $ Ag_Q2 <dbl> 1, 6, 6, 0, 5, 4, 5, 3, 4, 3, 5, 1, 5, 4, 2, 6, 5, 5, 5, 5, 2,…
## $ Ag_Q3 <dbl> 1, 0, 1, 1, 0, 4, 4, 4, 3, 3, 4, 4, 3, 4, 4, 5, 5, 4, 5, 3, 4,…
## $ Ag_Q4 <dbl> 1, 0, 1, 4, 6, 5, 5, 6, 6, 6, 4, 2, 4, 5, 4, 5, 6, 4, 5, 6, 5,…
## $ Ag_Q5 <dbl> 3, 6, 5, 0, 2, 5, 6, 2, 2, 3, 4, 1, 3, 5, 2, 6, 5, 6, 5, 3, 3,…
## $ Ag_Q6 <dbl> NA, 6, 5, 2, 3, 4, 5, 6, 1, 3, 4, 2, 3, 5, 1, 6, 2, 6, 6, 5, 3…
## $ Ag_Q7 <dbl> 3, 0, 1, 1, 1, 3, 3, 5, 0, 3, 2, 1, 2, 3, 5, 6, 4, 4, 6, 6, 2,…
## $ Co_Q1 <dbl> 3, 0, 0, 3, 5, 4, 3, 4, 5, 3, 3, 3, 1, 5, 5, 4, 4, 5, 6, 4, 2,…
## $ Co_Q10 <dbl> 1, 6, 5, 5, 3, 5, 1, 2, 5, 2, 4, 3, 4, 4, 3, 2, 5, 5, 5, 2, 2,…
## $ Co_Q2 <dbl> 3, 0, 0, 3, 4, 3, 3, 4, 5, 3, 5, 3, 3, 4, 5, 1, 5, 4, 5, 2, 5,…
## $ Co_Q3 <dbl> 2, 0, 1, 3, 4, 4, 5, 4, 5, 3, 4, 3, 4, 4, 5, 4, 2, 4, 5, 2, 2,…
## $ Co_Q4 <dbl> 3, 6, 5, 5, 5, 3, 2, 4, 3, 1, 4, 3, 1, 2, 4, 2, NA, 5, 6, 1, 1…
## $ Co_Q5 <dbl> 0, 6, 5, 5, 5, 3, 3, 1, 5, 1, 2, 4, 4, 4, 2, 1, 6, 4, 3, 1, 3,…
## $ Co_Q6 <dbl> 6, 0, 1, 4, 6, 5, 6, 5, 4, 3, 5, 5, 4, 6, 6, 1, 3, 4, 5, 4, 6,…
## $ Co_Q7 <dbl> 3, 6, 5, 1, 3, 4, NA, 2, 3, 3, 2, 2, 4, 2, 5, 2, 5, 5, 3, 1, 1…
## $ Co_Q8 <dbl> 3, 0, 1, 1, 3, 4, 3, 0, 1, 3, 2, 2, 1, 2, 4, 3, 2, 4, 5, 2, 6,…
## $ Co_Q9 <dbl> 3, 6, 5, 4, 3, 4, 5, 3, 5, 3, 4, 3, 4, 4, 2, 4, 6, 5, 5, 2, 2,…
## $ Ex_Q1 <dbl> 3, 0, 0, 2, 2, 4, 4, 3, 5, 4, 1, 1, 3, 3, 1, 3, 5, 1, 0, 4, 1,…
## $ Ex_Q2 <dbl> 3, 0, 0, 3, 3, 4, 5, 2, 5, 3, 4, 1, 3, 2, 1, 6, 5, 3, 4, 4, 1,…
## $ Ex_Q3 <dbl> 3, 6, 5, 5, 3, 3, 3, 0, 6, 1, 4, 2, 3, 2, 1, 2, 5, 1, 0, 5, 5,…
## $ Ex_Q4 <dbl> 1, 0, 1, 3, 3, 3, 4, 3, 5, 3, 2, 0, 3, 3, 1, 2, NA, 4, 4, 4, 1…
## $ Ex_Q5 <dbl> 3, 0, 1, 6, 3, 3, 4, 2, 5, 2, 2, 4, 2, 3, 0, 4, 5, 2, 3, 1, 1,…
## $ Ex_Q6 <dbl> 3, 6, 5, 3, 0, 4, 3, 1, 6, 3, 2, 1, 4, 2, 1, 5, 6, 2, 1, 2, 1,…
## $ Ex_Q7 <dbl> 3, 6, 5, 4, 1, 2, 5, 3, 6, 3, 4, 3, 5, 1, 1, 6, 6, 3, 1, 1, 3,…
## $ Ex_Q8 <dbl> 2, 0, 1, 4, 3, 4, 2, 4, 6, 2, 4, 0, 4, 4, 1, 3, 5, 4, 3, 1, 1,…
## $ Ex_Q9 <dbl> 4, 6, 5, 5, 5, 2, 3, 3, 6, 3, 3, 4, 4, 3, 2, 5, 5, 4, 4, 0, 4,…
## $ Ne_Q1 <dbl> 4, 0, 0, 4, 1, 2, 3, 4, 0, 3, 3, 3, 2, 1, 1, 3, 4, 5, 2, 4, 5,…
## $ Ne_Q2 <dbl> 0, 6, 6, 4, 2, 1, 2, 3, 1, 2, 5, 5, 3, 1, 1, 1, 1, 6, 1, 2, 5,…
## $ Ne_Q3 <dbl> 0, 0, 0, 1, 0, 1, 4, 4, 0, 4, 2, 5, 1, 2, 5, 5, 2, 2, 1, 2, 5,…
## $ Ne_Q4 <dbl> 3, 6, 6, 2, 3, 2, 3, 3, 0, 4, 4, 5, 5, 4, 5, 3, 2, 5, 2, 4, 5,…
## $ Ne_Q5 <dbl> 3, 0, 1, 4, 1, 1, 4, 5, 0, 3, 4, 6, 2, 0, 1, 1, 0, 4, 3, 1, 5,…
## $ Ne_Q6 <dbl> 1, 6, 5, 1, 0, 1, 3, 4, 0, 4, 4, 5, 2, 1, 5, 6, 1, 2, 2, 3, 5,…
## $ Ne_Q7 <dbl> NA, 0, 1, 2, 0, 2, 4, 4, 0, 3, 2, 5, 1, 2, 5, 2, 2, 4, 1, 3, 5…
## $ Ne_Q8 <dbl> 2, 0, 1, 1, 1, 1, 5, 4, 0, 4, 4, 5, 1, 2, 5, 2, 1, 5, 1, 2, 5,…
## $ Op_Q1 <dbl> 3, 6, 6, 6, 6, 3, 3, 6, 6, 3, 4, 5, 5, 5, 6, 4, 1, 2, 5, 6, 4,…
## $ Op_Q2 <dbl> 6, 0, 0, 4, 6, 4, 4, 0, 0, 3, 4, 3, 3, 4, 5, 3, 3, 4, 1, 6, 6,…
## $ Op_Q3 <dbl> 2, 6, 5, 5, 5, 4, 3, 2, 4, 3, 3, 6, 5, 5, 6, 5, 4, 4, 3, 6, 5,…
## $ Op_Q4 <dbl> 3, 0, 1, 6, 6, 3, 3, 0, 6, 3, 4, 5, 4, 5, 6, 6, 2, 2, 4, 5, 5,…
## $ Op_Q5 <dbl> 6, 6, 5, 2, 5, 4, 3, 2, 6, 6, 2, 4, 3, 4, 6, 6, 6, 5, 3, 3, 5,…
## $ Op_Q6 <dbl> 0, 6, 5, 1, 6, 4, 6, 0, 0, 3, 5, 3, 5, 5, 5, 2, 5, 1, 1, 6, 2,…
## $ Op_Q7 <dbl> 0, 6, 5, 5, 5, 4, 6, 2, 1, 3, 2, 4, 5, 5, 6, 3, 6, 5, 2, 6, 5,…
```
4\.7 Pipes
----------
Pipes are a way to order your code in a more readable format.
Let’s say you have a small data table with 10 participant IDs, two columns of variable type A, and two columns of variable type B. You want to calculate the mean of the A variables and the mean of the B variables and return a table with 10 rows (1 for each participant) and 3 columns (`id`, `A_mean` and `B_mean`).
One way you could do this is by creating a new object at every step and using that object in the next step. This is pretty clear, but you’ve created 6 unnecessary data objects in your environment. This can get confusing in very long scripts.
```
# make a data table with 10 subjects
data_original <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10, 3)
)
# gather columns A1 to B2 into "variable" and "value" columns
data_gathered <- gather(data_original, variable, value, A1:B2)
# separate the variable column at the _ into "var" and "var_n" columns
data_separated <- separate(data_gathered, variable, c("var", "var_n"), sep = 1)
# group the data by id and var
data_grouped <- group_by(data_separated, id, var)
# calculate the mean value for each id/var
data_summarised <- summarise(data_grouped, mean = mean(value), .groups = "drop")
# spread the mean column into A and B columns
data_spread <- spread(data_summarised, var, mean)
# rename A and B to A_mean and B_mean
data <- rename(data_spread, A_mean = A, B_mean = B)
data
```
| id | A\_mean | B\_mean |
| --- | --- | --- |
| 1 | \-0\.5938256 | 1\.0243046 |
| 2 | 0\.7440623 | 2\.7172046 |
| 3 | 0\.9309275 | 3\.9262358 |
| 4 | 0\.7197686 | 1\.9662632 |
| 5 | \-0\.0280832 | 1\.9473456 |
| 6 | \-0\.0982555 | 3\.2073687 |
| 7 | 0\.1256922 | 0\.9256321 |
| 8 | 1\.4526447 | 2\.3778116 |
| 9 | 0\.2976443 | 1\.6617481 |
| 10 | 0\.5589199 | 2\.1034679 |
You *can* name each object `data` and keep replacing the old data object with the new one at each step. This will keep your environment clean, but I don’t recommend it because it makes it too easy to accidentally run your code out of order when you are running line\-by\-line for development or debugging.
One way to avoid extra objects is to nest your functions, literally replacing each data object with the code that generated it in the previous step. This can be fine for very short chains.
```
mean_petal_width <- round(mean(iris$Petal.Width), 2)
```
But it gets extremely confusing for long chains:
```
# do not ever do this!!
data <- rename(
spread(
summarise(
group_by(
separate(
gather(
tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)),
variable, value, A1:B2),
variable, c("var", "var_n"), sep = 1),
id, var),
mean = mean(value), .groups = "drop"),
var, mean),
A_mean = A, B_mean = B)
```
The pipe lets you “pipe” the result of each function into the next function, allowing you to put your code in a logical order without creating too many extra objects.
```
# calculate mean of A and B variables for each participant
data <- tibble(
id = 1:10,
A1 = rnorm(10, 0),
A2 = rnorm(10, 1),
B1 = rnorm(10, 2),
B2 = rnorm(10,3)
) %>%
gather(variable, value, A1:B2) %>%
separate(variable, c("var", "var_n"), sep=1) %>%
group_by(id, var) %>%
summarise(mean = mean(value), .groups = "drop") %>%
spread(var, mean) %>%
rename(A_mean = A, B_mean = B)
```
You can read this code from top to bottom as follows:
1. Make a tibble called `data` with
* id of 1 to 10,
* A1 of 10 random numbers from a normal distribution,
* A2 of 10 random numbers from a normal distribution,
* B1 of 10 random numbers from a normal distribution,
* B2 of 10 random numbers from a normal distribution; and then
2. Gather to create `variable` and `value` columns from columns `A1` to `B2`; and then
3. Separate the column `variable` into 2 new columns called `var` and `var_n`, separating after the first character; and then
4. Group by columns `id` and `var`; and then
5. Summarise a new column called `mean` as the mean of the `value` column for each group and drop the grouping; and then
6. Spread to make new columns with the key names in `var` and values in `mean`; and then
7. Rename to make columns called `A_mean` (old `A`) and `B_mean` (old `B`)
You can make intermediate objects whenever you need to break up your code because it’s getting too complicated or you need to debug something.
You can debug a pipe by highlighting from the beginning to just before the pipe you want to stop at. Try this by highlighting from `data <-` to the end of the `separate` function and typing cmd\-return. What does `data` look like now?
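Another way to check an intermediate step is to assign just the first part of the chain to its own object; a sketch of this, stopping after `separate()` (the name `data_partial` is arbitrary):
```
# run only the first part of the pipe to inspect the intermediate result
data_partial <- tibble(
  id = 1:10,
  A1 = rnorm(10, 0),
  A2 = rnorm(10, 1),
  B1 = rnorm(10, 2),
  B2 = rnorm(10, 3)
) %>%
  gather(variable, value, A1:B2) %>%
  separate(variable, c("var", "var_n"), sep = 1)

glimpse(data_partial)
```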
Chain all the steps above using pipes.
```
personality_reshaped <- personality %>%
gather("question", "score", Op1:Ex9) %>%
separate(question, c("domain", "qnumber"), sep = 2) %>%
unite("domain_n", domain, qnumber, sep = "_Q") %>%
spread(domain_n, score)
```
4\.8 More Complex Example
-------------------------
### 4\.8\.1 Load Data
Get data on infant and maternal mortality rates from the dataskills package. If you don’t have the package, you can download them here:
* [infant mortality](https://psyteachr.github.io/msc-data-skills/data/infmort.csv)
* [maternal mortality](https://psyteachr.github.io/msc-data-skills/data/matmort.xls)
```
data("infmort", package = "dataskills")
head(infmort)
```
| Country | Year | Infant mortality rate (probability of dying between birth and age 1 per 1000 live births) |
| --- | --- | --- |
| Afghanistan | 2015 | 66\.3 \[52\.7\-83\.9] |
| Afghanistan | 2014 | 68\.1 \[55\.7\-83\.6] |
| Afghanistan | 2013 | 69\.9 \[58\.7\-83\.5] |
| Afghanistan | 2012 | 71\.7 \[61\.6\-83\.7] |
| Afghanistan | 2011 | 73\.4 \[64\.4\-84\.2] |
| Afghanistan | 2010 | 75\.1 \[66\.9\-85\.1] |
```
data("matmort", package = "dataskills")
head(matmort)
```
| Country | 1990 | 2000 | 2015 |
| --- | --- | --- | --- |
| Afghanistan | 1 340 \[ 878 \- 1 950] | 1 100 \[ 745 \- 1 570] | 396 \[ 253 \- 620] |
| Albania | 71 \[ 58 \- 88] | 43 \[ 33 \- 56] | 29 \[ 16 \- 46] |
| Algeria | 216 \[ 141 \- 327] | 170 \[ 118 \- 241] | 140 \[ 82 \- 244] |
| Angola | 1 160 \[ 627 \- 2 020] | 924 \[ 472 \- 1 730] | 477 \[ 221 \- 988] |
| Argentina | 72 \[ 64 \- 80] | 60 \[ 54 \- 65] | 52 \[ 44 \- 63] |
| Armenia | 58 \[ 51 \- 65] | 40 \[ 35 \- 46] | 25 \[ 21 \- 31] |
### 4\.8\.2 Wide to Long
`matmort` is in wide format, with a separate column for each year. Change it to long format, with a row for each Country/Year observation.
This example is complicated because the column names to gather *are* numbers. If the column names are non\-standard (e.g., have spaces, start with numbers, or have special characters), you can enclose them in backticks (\`) like the example below.
```
matmort_long <- matmort %>%
pivot_longer(cols = `1990`:`2015`,
names_to = "Year",
values_to = "stats") %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Albania", "Alban…
## $ Year <chr> "1990", "2000", "2015", "1990", "2000", "2015", "1990", "2000"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "1 100 [ 745 - 1 570]", "396 [ 253 - …
```
You can put `matmort` at the first argument to `pivot_longer()`; you don’t have to pipe it in. But when I’m working on data processing I often find myself needing to insert or rearrange steps and I constantly introduce errors by forgetting to take the first argument out of a pipe chain, so now I start with the original data table and pipe from there.
Alternatively, you can use the `gather()` function.
```
matmort_long <- matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
glimpse()
```
```
## Rows: 543
## Columns: 3
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ stats <chr> "1 340 [ 878 - 1 950]", "71 [ 58 - 88]", "216 [ 141 - 327]",…
```
### 4\.8\.3 One Piece of Data per Column
The data in the `stats` column is in an unusual format with some sort of confidence interval in brackets and lots of extra spaces. We don’t need any of the spaces, so first we’ll remove them with `mutate()`, which we’ll learn more about in the next lesson.
The `separate()` function will split your data on anything that is not a number or letter by default, so try it first without specifying the `sep` argument. The `into` argument is a vector of the new column names.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi")) %>%
glimpse()
```
```
## Warning: Expected 3 pieces. Additional pieces discarded in 543 rows [1, 2, 3, 4,
## 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, ...].
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
The `gsub(pattern, replacement, x)` function is a flexible way to do search and replace. The example above replaces all occurrences of the `pattern` " " (a space) with the `replacement` "" (nothing) in the string `x` (the `stats` column). Use `sub()` instead if you only want to replace the first occurrence of a pattern. We only used a simple pattern here, but you can use more complicated [regex](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) patterns to replace, for example, all even digits (e.g., `gsub("[02468]", "*", "id = 123456")`) or all occurrences of the word colour in US or UK spelling (e.g., `gsub("colo(u)?r", "**", "replace color, colour, or colours, but not collors")`).
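To illustrate the difference between `sub()` and `gsub()`, here is a small sketch using a made-up string:
```
x <- "colorful colours"

sub("colo(u)?r", "**", x)  # replaces only the first match: "**ful colours"
gsub("colo(u)?r", "**", x) # replaces every match: "**ful **s"
```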
#### 4\.8\.3\.1 Handle spare columns with `extra`
The previous example should have given you a warning about “Additional pieces discarded in 543 rows.” This is because `separate` splits the column at the brackets and dashes, so the text `100[90-110]` would split into four values `c("100", "90", "110", "")`, but we only specified 3 new columns. The fourth value is always empty (just the part after the last bracket), so we are happy to drop it, but `separate` generates a warning so you don’t do that accidentally. You can turn off the warning by adding the `extra` argument and setting it to `"drop"`. Look at the help for `?tidyr::separate` to see what the other options do.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(stats, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <chr> "1340", "71", "216", "1160", "72", "58", "8", "8", "64", "46",…
## $ ci_low <chr> "878", "58", "141", "627", "64", "51", "7", "7", "56", "34", "…
## $ ci_hi <chr> "1950", "88", "327", "2020", "80", "65", "9", "10", "74", "61"…
```
#### 4\.8\.3\.2 Set delimiters with `sep`
Now do the same with `infmort`. It’s already in long format, so you don’t need to use `gather`, but the third column has a ridiculously long name, so we can just refer to it by its column number (3\).
```
infmort_split <- infmort %>%
separate(3, c("rate", "ci_low", "ci_hi"), extra = "drop") %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66", "68", "69", "71", "73", "75", "76", "78", "80", "82", "8…
## $ ci_low <chr> "3", "1", "9", "7", "4", "1", "8", "6", "4", "3", "4", "7", "0…
## $ ci_hi <chr> "52", "55", "58", "61", "64", "66", "69", "71", "73", "75", "7…
```
**Wait, that didn’t work at all!** It split the column on spaces, brackets, *and* full stops. We just want to split on the spaces, brackets and dashes. So we need to manually set `sep` to what the delimiters are. Also, once there are more than a few arguments specified for a function, it’s easier to read them if you put one argument on each line.
You can use [regular expressions](https://stat.ethz.ch/R-manual/R-devel/library/base/html/regex.html) to separate complex columns. Here, we want to separate on dashes and brackets. You can separate on a list of delimiters by putting them in parentheses, separated by `|`. It’s a little more complicated because square brackets have a special meaning in regex, so you need to “escape” the left one with two backslashes (`\\[`).
```
infmort_split <- infmort %>%
separate(
col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])"
) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <chr> "66.3 ", "68.1 ", "69.9 ", "71.7 ", "73.4 ", "75.1 ", "76.8 ",…
## $ ci_low <chr> "52.7", "55.7", "58.7", "61.6", "64.4", "66.9", "69.0", "71.2"…
## $ ci_hi <chr> "83.9", "83.6", "83.5", "83.7", "84.2", "85.1", "86.1", "87.3"…
```
#### 4\.8\.3\.3 Fix data types with `convert`
That’s better. Notice the `<chr>` next to `rate`, `ci_low` and `ci_hi` in the output above. That means these columns hold characters (like words), not numbers or integers. This can cause problems when you try to do things like average the numbers (you can’t average words), so we can fix it by adding the argument `convert` and setting it to `TRUE`.
```
infmort_split <- infmort %>%
separate(col = 3,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
sep = "(\\[|-|])",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 5,044
## Columns: 5
## $ Country <chr> "Afghanistan", "Afghanistan", "Afghanistan", "Afghanistan", "A…
## $ Year <dbl> 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, 2007, 2006, 20…
## $ rate <dbl> 66.3, 68.1, 69.9, 71.7, 73.4, 75.1, 76.8, 78.6, 80.4, 82.3, 84…
## $ ci_low <dbl> 52.7, 55.7, 58.7, 61.6, 64.4, 66.9, 69.0, 71.2, 73.4, 75.5, 77…
## $ ci_hi <dbl> 83.9, 83.6, 83.5, 83.7, 84.2, 85.1, 86.1, 87.3, 88.9, 90.7, 92…
```
Do the same for `matmort`.
```
matmort_split <- matmort_long %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.4 All in one step
We can chain all the steps for `matmort` above together, since we don’t need those intermediate data tables.
```
matmort2 <- dataskills::matmort %>%
gather("Year", "stats", `1990`:`2015`) %>%
mutate(stats = gsub(" ", "", stats)) %>%
separate(
col = stats,
into = c("rate", "ci_low", "ci_hi"),
extra = "drop",
convert = TRUE
) %>%
glimpse()
```
```
## Rows: 543
## Columns: 5
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina", "A…
## $ Year <chr> "1990", "1990", "1990", "1990", "1990", "1990", "1990", "1990"…
## $ rate <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33, 9,…
## $ ci_low <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, 7, 4…
## $ ci_hi <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 38, 1…
```
### 4\.8\.5 Columns by Year
Spread out the maternal mortality rate by year.
```
matmort_wide <- matmort2 %>%
spread(key = Year, value = rate) %>%
print()
```
```
## # A tibble: 542 x 6
## Country ci_low ci_hi `1990` `2000` `2015`
## <chr> <int> <int> <int> <int> <int>
## 1 Afghanistan 253 620 NA NA 396
## 2 Afghanistan 745 1570 NA 1100 NA
## 3 Afghanistan 878 1950 1340 NA NA
## 4 Albania 16 46 NA NA 29
## 5 Albania 33 56 NA 43 NA
## 6 Albania 58 88 71 NA NA
## 7 Algeria 82 244 NA NA 140
## 8 Algeria 118 241 NA 170 NA
## 9 Algeria 141 327 216 NA NA
## 10 Angola 221 988 NA NA 477
## # … with 532 more rows
```
Nope, that didn’t work at all, but it’s a really common mistake when spreading data. This is because `spread` matches on all the remaining columns, so Afghanistan with `ci_low` of 253 is treated as a different observation than Afghanistan with `ci_low` of 745\.
This is where `pivot_wider()` can be very useful. You can set `values_from` to multiple column names and their names will be added to the `names_from` values.
```
matmort_wide <- matmort2 %>%
pivot_wider(
names_from = Year,
values_from = c(rate, ci_low, ci_hi)
)
glimpse(matmort_wide)
```
```
## Rows: 181
## Columns: 10
## $ Country <chr> "Afghanistan", "Albania", "Algeria", "Angola", "Argentina"…
## $ rate_1990 <int> 1340, 71, 216, 1160, 72, 58, 8, 8, 64, 46, 26, 569, 58, 33…
## $ rate_2000 <int> 1100, 43, 170, 924, 60, 40, 9, 5, 48, 61, 21, 399, 48, 26,…
## $ rate_2015 <int> 396, 29, 140, 477, 52, 25, 6, 4, 25, 80, 15, 176, 27, 4, 7…
## $ ci_low_1990 <int> 878, 58, 141, 627, 64, 51, 7, 7, 56, 34, 20, 446, 47, 28, …
## $ ci_low_2000 <int> 745, 33, 118, 472, 54, 35, 8, 4, 42, 50, 18, 322, 38, 22, …
## $ ci_low_2015 <int> 253, 16, 82, 221, 44, 21, 5, 3, 17, 53, 12, 125, 19, 3, 5,…
## $ ci_hi_1990 <int> 1950, 88, 327, 2020, 80, 65, 9, 10, 74, 61, 33, 715, 72, 3…
## $ ci_hi_2000 <int> 1570, 56, 241, 1730, 65, 46, 10, 6, 55, 74, 26, 496, 58, 3…
## $ ci_hi_2015 <int> 620, 46, 244, 988, 63, 31, 7, 5, 35, 124, 19, 280, 37, 6, …
```
### 4\.8\.6 Experimentum Data
Students in the Institute of Neuroscience and Psychology at the University of Glasgow can use the online experiment builder platform, [Experimentum](https://debruine.github.io/experimentum/). The platform is also [open source on github](https://github.com/debruine/experimentum) for anyone who can install it on a web server. It allows you to group questionnaires and experiments into **projects** with randomisation and counterbalancing. Data for questionnaires and experiments are downloadable in long format, but researchers often need to put them in wide format for analysis.
Look at the help menu for built\-in dataset `dataskills::experimentum_quests` to learn what each column is. Subjects are asked questions about dogs to test the different questionnaire response types.
* current: Do you own a dog? (yes/no)
* past: Have you ever owned a dog? (yes/no)
* name: What is the best name for a dog? (free short text)
* good: How good are dogs? (1\=pretty good:7\=very good)
* country: What country do borzois come from?
* good\_borzoi: How good are borzois? (0\=pretty good:100\=very good)
* text: Write some text about dogs. (free long text)
* time: What time is it? (time)
To get the dataset into wide format, where each question is in a separate column, use the following code:
```
q <- dataskills::experimentum_quests %>%
pivot_wider(id_cols = session_id:user_age,
names_from = q_name,
values_from = dv) %>%
type.convert(as.is = TRUE) %>%
print()
```
```
## # A tibble: 24 x 15
## session_id project_id quest_id user_id user_sex user_status user_age current
## <int> <int> <int> <int> <chr> <chr> <dbl> <int>
## 1 34034 1 1 31105 female guest 28.2 1
## 2 34104 1 1 31164 male registered 19.4 1
## 3 34326 1 1 31392 female guest 17 0
## 4 34343 1 1 31397 male guest 22 1
## 5 34765 1 1 31770 female guest 44 1
## 6 34796 1 1 31796 female guest 35.9 0
## 7 34806 1 1 31798 female guest 35 0
## 8 34822 1 1 31802 female guest 58 1
## 9 34864 1 1 31820 male guest 20 0
## 10 35014 1 1 31921 female student 39.2 1
## # … with 14 more rows, and 7 more variables: past <int>, name <chr>,
## # good <int>, country <chr>, text <chr>, good_borzoi <int>, time <chr>
```
The responses in the `dv` column have multiple types (e.g., integer, double, and character), but they are all represented as character strings when they’re in the same column. After you spread the data to wide format, each column should be given the correct data type. The function `type.convert()` makes a best guess at what type each new column should be and converts it. The argument `as.is = TRUE` keeps character columns as characters instead of converting them to factors.
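As a minimal sketch of what `type.convert()` does (toy data; the column names are made up for illustration):
```
chr_table <- tibble(
  n = c("1", "2", "3"),            # looks like integers
  score = c("1.5", "2.5", "3.5"),  # looks like doubles
  word = c("a", "b", "c")          # genuinely character
)

type.convert(chr_table, as.is = TRUE) %>% glimpse()
# n becomes <int>, score becomes <dbl>, and word stays <chr>
```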
4\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [long](https://psyteachr.github.io/glossary/l#long) | Data where each observation is on a separate row |
| [observation](https://psyteachr.github.io/glossary/o#observation) | All of the data about a single trial or question. |
| [value](https://psyteachr.github.io/glossary/v#value) | A single number or piece of data. |
| [variable](https://psyteachr.github.io/glossary/v#variable) | A word that identifies and stores the value of some data for later use. |
| [wide](https://psyteachr.github.io/glossary/w#wide) | Data where all of the observations about one subject are in the same row |
4\.10 Exercises
---------------
Download the [exercises](exercises/04_tidyr_exercise.Rmd). See the [answers](exercises/04_tidyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(4)
# run this to access the answers
dataskills::exercise(4, answers = TRUE)
```
Chapter 5 Data Wrangling
========================
5\.1 Learning Objectives
------------------------
### 5\.1\.1 Basic
1. Be able to use the 6 main dplyr one\-table verbs: [(video)](https://youtu.be/l12tNKClTR0)
* [`select()`](dplyr.html#select)
* [`filter()`](dplyr.html#filter)
* [`arrange()`](dplyr.html#arrange)
* [`mutate()`](dplyr.html#mutate)
* [`summarise()`](dplyr.html#summarise)
* [`group_by()`](dplyr.html#group_by)
2. Be able to [wrangle data by chaining tidyr and dplyr functions](dplyr.html#all-together) [(video)](https://youtu.be/hzFFAkwrkqA)
3. Be able to use these additional one\-table verbs: [(video)](https://youtu.be/GmfF162mq4g)
* [`rename()`](dplyr.html#rename)
* [`distinct()`](dplyr.html#distinct)
* [`count()`](dplyr.html#count)
* [`slice()`](dplyr.html#slice)
* [`pull()`](dplyr.html#pull)
### 5\.1\.2 Intermediate
4. Fine control of [`select()` operations](dplyr.html#select_helpers) [(video)](https://youtu.be/R1bi1QwF9t0)
5. Use [window functions](dplyr.html#window) [(video)](https://youtu.be/uo4b0W9mqPc)
5\.2 Resources
--------------
* [Chapter 5: Data Transformation](http://r4ds.had.co.nz/transform.html) in *R for Data Science*
* [Data transformation cheat sheet](https://github.com/rstudio/cheatsheets/raw/master/data-transformation.pdf)
* [Chapter 16: Date and times](http://r4ds.had.co.nz/dates-and-times.html) in *R for Data Science*
5\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(lubridate)
library(dataskills)
set.seed(8675309) # makes sure random numbers are reproducible
```
### 5\.3\.1 The `disgust` dataset
These examples will use data from `dataskills::disgust`, which contains data from the [Three Domain Disgust Scale](http://digitalrepository.unm.edu/cgi/viewcontent.cgi?article=1139&context=psy_etds). Each participant is identified by a unique `user_id` and each questionnaire completion has a unique `id`. Look at the Help for this dataset to see the individual questions.
```
data("disgust", package = "dataskills")
#disgust <- read_csv("https://psyteachr.github.io/msc-data-skills/data/disgust.csv")
```
5\.4 Six main dplyr verbs
-------------------------
Most of the [data wrangling](https://psyteachr.github.io/glossary/d#data-wrangling "The process of preparing data for visualisation and statistical analysis.") you’ll want to do with psychological data will involve the `tidyr` functions you learned in [Chapter 4](tidyr.html#tidyr) and the six main `dplyr` verbs: `select`, `filter`, `arrange`, `mutate`, `summarise`, and `group_by`.
### 5\.4\.1 select()
Select columns by name or number.
You can select each column individually, separated by commas (e.g., `col1, col2`). You can also select all columns between two columns by separating them with a colon (e.g., `start_col:end_col`).
```
moral <- disgust %>% select(user_id, moral1:moral7)
names(moral)
```
```
## [1] "user_id" "moral1" "moral2" "moral3" "moral4" "moral5" "moral6"
## [8] "moral7"
```
You can select columns by number, which is useful when the column names are long or complicated.
```
sexual <- disgust %>% select(2, 11:17)
names(sexual)
```
```
## [1] "user_id" "sexual1" "sexual2" "sexual3" "sexual4" "sexual5" "sexual6"
## [8] "sexual7"
```
You can use a minus symbol to unselect columns, leaving all of the other columns. If you want to exclude a span of columns, put parentheses around the span first (e.g., `-(moral1:moral7)`, not `-moral1:moral7`).
```
pathogen <- disgust %>% select(-id, -date, -(moral1:sexual7))
names(pathogen)
```
```
## [1] "user_id" "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5"
## [7] "pathogen6" "pathogen7"
```
#### 5\.4\.1\.1 Select helpers
You can select columns based on criteria about the column names.
##### 5\.4\.1\.1\.1 `starts_with()`
Select columns that start with a character string.
```
u <- disgust %>% select(starts_with("u"))
names(u)
```
```
## [1] "user_id"
```
##### 5\.4\.1\.1\.2 `ends_with()`
Select columns that end with a character string.
```
firstq <- disgust %>% select(ends_with("1"))
names(firstq)
```
```
## [1] "moral1" "sexual1" "pathogen1"
```
##### 5\.4\.1\.1\.3 `contains()`
Select columns that contain a character string.
```
pathogen <- disgust %>% select(contains("pathogen"))
names(pathogen)
```
```
## [1] "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5" "pathogen6"
## [7] "pathogen7"
```
##### 5\.4\.1\.1\.4 `num_range()`
Select columns whose names are a prefix followed by a number in a given range.
```
moral2_4 <- disgust %>% select(num_range("moral", 2:4))
names(moral2_4)
```
```
## [1] "moral2" "moral3" "moral4"
```
Use `width` to set the number of digits with leading zeros. For example, `num_range("var_", 8:10, width = 2)` selects columns `var_08`, `var_09`, and `var_10`.
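A small sketch with hypothetical zero-padded column names (not from the `disgust` data) shows how `width` works:
```
# hypothetical zero-padded column names
padded <- tibble(
  var_07 = 1, var_08 = 2, var_09 = 3, var_10 = 4, other = 5
)

padded %>%
  select(num_range("var_", 8:10, width = 2)) %>%
  names()
# [1] "var_08" "var_09" "var_10"
```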
### 5\.4\.2 filter()
Select rows by matching column criteria.
Select all rows where the user\_id is 1 (that’s Lisa).
```
disgust %>% filter(user_id == 1)
```
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
Remember to use `==` and not `=` to check if two things are equivalent. A single `=` assigns the righthand value to the lefthand variable and (usually) evaluates to `TRUE`.
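If the distinction is unclear, this minimal plain-R sketch may help:
```
# = assigns; == tests equality
x <- 1   # assign the value 1 to x
x == 1   # TRUE: x is equal to 1
x == 2   # FALSE: x is not equal to 2
```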
You can select on multiple criteria by separating them with commas.
```
amoral <- disgust %>% filter(
moral1 == 0,
moral2 == 0,
moral3 == 0,
moral4 == 0,
moral5 == 0,
moral6 == 0,
moral7 == 0
)
```
You can use the symbols `&`, `|`, and `!` to mean “and,” “or,” and “not.” You can also use other operators to make equations.
```
# everyone who chose either 0 or 7 for question moral1
moral_extremes <- disgust %>%
filter(moral1 == 0 | moral1 == 7)
# everyone who chose the same answer for all moral questions
moral_consistent <- disgust %>%
filter(
moral2 == moral1 &
moral3 == moral1 &
moral4 == moral1 &
moral5 == moral1 &
moral6 == moral1 &
moral7 == moral1
)
# everyone who did not answer 7 for all 7 moral questions
moral_no_ceiling <- disgust %>%
filter(moral1+moral2+moral3+moral4+moral5+moral6+moral7 != 7*7)
```
#### 5\.4\.2\.1 Match operator (%in%)
Sometimes you need to exclude some participant IDs for reasons that can’t be described in code. The match operator (`%in%`) is useful here for testing if a column value is in a list. Surround the equation with parentheses and put `!` in front to test that a value is not in the list.
```
no_researchers <- disgust %>%
filter(!(user_id %in% c(1,2)))
```
#### 5\.4\.2\.2 Dates
You can use the `lubridate` package to work with dates. For example, you can use the `year()` function to return just the year from the `date` column and then select only data collected in 2010\.
```
disgust2010 <- disgust %>%
filter(year(date) == 2010)
```
Table 5\.1: Rows 1\-6 from `disgust2010`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 6902 | 5469 | 2010\-12\-06 | 0 | 1 | 3 | 4 | 1 | 0 | 1 | 3 | 5 | 2 | 4 | 6 | 6 | 5 | 5 | 2 | 4 | 4 | 2 | 2 | 6 |
| 6158 | 6066 | 2010\-04\-18 | 4 | 5 | 6 | 5 | 5 | 4 | 4 | 3 | 0 | 1 | 6 | 3 | 5 | 3 | 6 | 5 | 5 | 5 | 5 | 5 | 5 |
| 6362 | 7129 | 2010\-06\-09 | 4 | 4 | 4 | 4 | 3 | 3 | 2 | 4 | 2 | 1 | 3 | 2 | 3 | 6 | 5 | 2 | 0 | 4 | 5 | 5 | 4 |
| 6302 | 39318 | 2010\-05\-20 | 2 | 4 | 1 | 4 | 5 | 6 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 3 | 2 | 3 | 2 | 3 | 2 | 4 |
| 5429 | 43029 | 2010\-01\-02 | 1 | 1 | 1 | 3 | 6 | 4 | 2 | 2 | 0 | 1 | 4 | 6 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 6 | 4 |
| 6732 | 71955 | 2010\-10\-15 | 2 | 5 | 3 | 6 | 3 | 2 | 5 | 4 | 3 | 3 | 6 | 6 | 6 | 5 | 4 | 2 | 6 | 5 | 6 | 6 | 3 |
Or select data from at least 5 years ago. You can use the `range` function to check the minimum and maximum dates in the resulting dataset.
```
disgust_5ago <- disgust %>%
filter(date < today() - dyears(5))
range(disgust_5ago$date)
```
```
## [1] "2008-07-10" "2016-08-04"
```
### 5\.4\.3 arrange()
Sort your dataset using `arrange()`. You will find yourself needing to sort data in R much less than you do in Excel, since you don’t need to have rows next to each other in order to, for example, calculate group means. But `arrange()` can be useful when preparing data for display in tables.
```
disgust_order <- disgust %>%
arrange(date, moral1)
```
Table 5\.2: Rows 1\-6 from `disgust_order`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
| 3 | 155324 | 2008\-07\-11 | 2 | 4 | 3 | 5 | 2 | 1 | 4 | 1 | 0 | 1 | 2 | 2 | 6 | 1 | 4 | 3 | 1 | 0 | 4 | 4 | 2 |
| 6 | 155386 | 2008\-07\-12 | 2 | 4 | 0 | 4 | 0 | 0 | 0 | 6 | 0 | 0 | 6 | 4 | 4 | 6 | 4 | 5 | 5 | 1 | 6 | 4 | 2 |
| 7 | 155409 | 2008\-07\-12 | 4 | 5 | 5 | 4 | 5 | 1 | 5 | 3 | 0 | 1 | 5 | 2 | 0 | 0 | 5 | 5 | 3 | 4 | 4 | 2 | 6 |
| 4 | 155366 | 2008\-07\-12 | 6 | 6 | 6 | 3 | 6 | 6 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 4 | 4 | 5 | 5 | 4 | 6 | 0 |
| 5 | 155370 | 2008\-07\-12 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 2 | 6 | 4 | 3 | 6 | 6 | 6 | 6 | 6 | 6 | 2 | 4 | 4 | 6 |
Reverse the order using `desc()`
```
disgust_order_desc <- disgust %>%
arrange(desc(date))
```
Table 5\.3: Rows 1\-6 from `disgust_order_desc`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 39456 | 356866 | 2017\-08\-21 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 39447 | 128727 | 2017\-08\-13 | 2 | 4 | 1 | 2 | 2 | 5 | 3 | 0 | 0 | 1 | 0 | 0 | 2 | 1 | 2 | 0 | 2 | 1 | 1 | 1 | 1 |
| 39371 | 152955 | 2017\-06\-13 | 6 | 6 | 3 | 6 | 6 | 6 | 6 | 1 | 0 | 0 | 2 | 1 | 4 | 4 | 5 | 0 | 5 | 4 | 3 | 6 | 3 |
| 39342 | 48303 | 2017\-05\-22 | 4 | 5 | 4 | 4 | 6 | 4 | 5 | 2 | 1 | 4 | 1 | 1 | 3 | 1 | 5 | 5 | 4 | 4 | 4 | 4 | 5 |
| 39159 | 151633 | 2017\-04\-04 | 4 | 5 | 6 | 5 | 3 | 6 | 2 | 6 | 4 | 0 | 4 | 0 | 3 | 6 | 4 | 4 | 6 | 6 | 6 | 6 | 4 |
| 38942 | 370464 | 2017\-02\-01 | 1 | 5 | 0 | 6 | 5 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 3 | 3 | 1 | 6 | 3 |
### 5\.4\.4 mutate()
Add new columns. This is one of the most useful functions in the tidyverse.
Refer to other columns by their names (unquoted). You can add more than one column in the same mutate function, just separate the columns with a comma. Once you make a new column, you can use it in further column definitions (e.g., `total` below).
```
disgust_total <- disgust %>%
mutate(
pathogen = pathogen1 + pathogen2 + pathogen3 + pathogen4 + pathogen5 + pathogen6 + pathogen7,
moral = moral1 + moral2 + moral3 + moral4 + moral5 + moral6 + moral7,
sexual = sexual1 + sexual2 + sexual3 + sexual4 + sexual5 + sexual6 + sexual7,
total = pathogen + moral + sexual,
user_id = paste0("U", user_id)
)
```
Table 5\.4: Rows 1\-6 from `disgust_total`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 | pathogen | moral | sexual | total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | U0 | 2008\-10\-07 | 5 | 6 | 4 | 6 | 5 | 5 | 6 | 4 | 0 | 1 | 0 | 1 | 4 | 5 | 6 | 1 | 6 | 5 | 4 | 5 | 6 | 33 | 37 | 15 | 85 |
| 1 | U1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 | 19 | 10 | 12 | 41 |
| 1599 | U2 | 2008\-10\-27 | 1 | 1 | 1 | 1 | NA | NA | 1 | 1 | NA | 1 | NA | 1 | NA | NA | NA | NA | 1 | NA | NA | NA | NA | NA | NA | NA | NA |
| 13332 | U2118 | 2012\-01\-02 | 0 | 1 | 1 | 1 | 1 | 2 | 1 | 4 | 3 | 0 | 6 | 0 | 3 | 5 | 5 | 6 | 4 | 6 | 5 | 5 | 4 | 35 | 7 | 21 | 63 |
| 23 | U2311 | 2008\-07\-15 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 2 | 1 | 2 | 1 | 1 | 1 | 5 | 5 | 5 | 4 | 4 | 5 | 4 | 3 | 30 | 28 | 13 | 71 |
| 1160 | U3630 | 2008\-10\-06 | 1 | 5 | NA | 5 | 5 | 5 | 1 | 0 | 5 | 0 | 2 | 0 | 1 | 0 | 6 | 3 | 1 | 1 | 3 | 1 | 0 | 15 | NA | 8 | NA |
You can overwrite a column by giving a new column the same name as the old column (see `user_id` above). Make sure that you mean to do this and that you aren’t trying to use the old column value after you redefine it.
### 5\.4\.5 summarise()
Create summary statistics for the dataset. Check the [Data Wrangling Cheat Sheet](https://www.rstudio.org/links/data_wrangling_cheat_sheet) or the [Data Transformation Cheat Sheet](https://github.com/rstudio/cheatsheets/raw/master/source/pdfs/data-transformation-cheatsheet.pdf) for various summary functions. Some common ones are: `mean()`, `sd()`, `n()`, `sum()`, and `quantile()`.
```
disgust_summary<- disgust_total %>%
summarise(
n = n(),
q25 = quantile(total, .25, na.rm = TRUE),
q50 = quantile(total, .50, na.rm = TRUE),
q75 = quantile(total, .75, na.rm = TRUE),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE)
)
```
Table 5\.5: All rows from `disgust_summary`
| n | q25 | q50 | q75 | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 20000 | 59 | 71 | 83 | 70\.6868 | 18\.24253 | 0 | 126 |
### 5\.4\.6 group\_by()
Create subsets of the data. You can use this to create summaries,
like the mean value for all of your experimental groups.
Here, we’ll use `mutate` to create a new column called `year`, group by `year`, and calculate the average scores.
```
disgust_groups <- disgust_total %>%
mutate(year = year(date)) %>%
group_by(year) %>%
summarise(
n = n(),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE),
.groups = "drop"
)
```
Table 5\.6: All rows from `disgust_groups`
| year | n | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- |
| 2008 | 2578 | 70\.29975 | 18\.46251 | 0 | 126 |
| 2009 | 2580 | 69\.74481 | 18\.61959 | 3 | 126 |
| 2010 | 1514 | 70\.59238 | 18\.86846 | 6 | 126 |
| 2011 | 6046 | 71\.34425 | 17\.79446 | 0 | 126 |
| 2012 | 5938 | 70\.42530 | 18\.35782 | 0 | 126 |
| 2013 | 1251 | 71\.59574 | 17\.61375 | 0 | 126 |
| 2014 | 58 | 70\.46296 | 17\.23502 | 19 | 113 |
| 2015 | 21 | 74\.26316 | 16\.89787 | 43 | 107 |
| 2016 | 8 | 67\.87500 | 32\.62531 | 0 | 110 |
| 2017 | 6 | 57\.16667 | 27\.93862 | 21 | 90 |
If you don’t add `.groups = "drop"` at the end of the `summarise()` function, you will get the following message: “`summarise()` ungrouping output (override with `.groups` argument).” This just reminds you that the groups are still in effect and any further functions will also be grouped.
Older versions of dplyr didn’t print this message, so older code that doesn’t set `.groups` will generate it when run with a newer version of dplyr. Older code might also call `ungroup()` after `summarise()` to indicate that groupings should be dropped. The default behaviour is usually correct, so you don’t need to worry, but it’s best to set `.groups` explicitly in a `summarise()` that follows `group_by()` if you want to “keep” or “drop” the groupings.
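If you want to see the difference directly, this short sketch (reusing `disgust_total` from above) compares `.groups = "drop"` with `.groups = "keep"` using `group_vars()`:
```
by_year_drop <- disgust_total %>%
  mutate(year = year(date)) %>%
  group_by(year) %>%
  summarise(n = n(), .groups = "drop")

by_year_keep <- disgust_total %>%
  mutate(year = year(date)) %>%
  group_by(year) %>%
  summarise(n = n(), .groups = "keep")

group_vars(by_year_drop) # character(0): no grouping left
group_vars(by_year_keep) # "year": still grouped by year
```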
You can use `filter` after `group_by`. The following example returns the lowest total score from each year (i.e., the row where the `rank()` of the value in the column `total` is equivalent to `1`).
```
disgust_lowest <- disgust_total %>%
mutate(year = year(date)) %>%
select(user_id, year, total) %>%
group_by(year) %>%
filter(rank(total) == 1) %>%
arrange(year)
```
Table 5\.7: All rows from `disgust_lowest`
| user\_id | year | total |
| --- | --- | --- |
| U236585 | 2009 | 3 |
| U292359 | 2010 | 6 |
| U245384 | 2013 | 0 |
| U206293 | 2014 | 19 |
| U407089 | 2015 | 43 |
| U453237 | 2016 | 0 |
| U356866 | 2017 | 21 |
You can also use `mutate` after `group_by`. The following example calculates subject\-mean\-centered scores by grouping the scores by `user_id` and then subtracting the group\-specific mean from each score. Note the use of `gather` to tidy the data into a long format first.
```
disgust_smc <- disgust %>%
gather("question", "score", moral1:pathogen7) %>%
group_by(user_id) %>%
mutate(score_smc = score - mean(score, na.rm = TRUE)) %>%
ungroup()
```
Use `ungroup()` as soon as you are done with grouped functions, otherwise the data table will still be grouped when you use it in the future.
Table 5\.8: Rows 1\-6 from `disgust_smc`
| id | user\_id | date | question | score | score\_smc |
| --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | moral1 | 5 | 0\.9523810 |
| 1 | 1 | 2008\-07\-10 | moral1 | 2 | 0\.0476190 |
| 1599 | 2 | 2008\-10\-27 | moral1 | 1 | 0\.0000000 |
| 13332 | 2118 | 2012\-01\-02 | moral1 | 0 | \-3\.0000000 |
| 23 | 2311 | 2008\-07\-15 | moral1 | 4 | 0\.6190476 |
| 1160 | 3630 | 2008\-10\-06 | moral1 | 1 | \-1\.2500000 |
### 5\.4\.7 All Together
A lot of what we did above would be easier if the data were tidy, so let’s do that first. Then we can use `group_by` to calculate the domain scores.
After that, we can spread out the 3 domains, calculate the total score, remove any rows with a missing (`NA`) total, and calculate mean values by year.
```
disgust_tidy <- dataskills::disgust %>%
gather("question", "score", moral1:pathogen7) %>%
separate(question, c("domain","q_num"), sep = -1) %>%
group_by(id, user_id, date, domain) %>%
summarise(score = mean(score), .groups = "drop")
```
Table 5\.9: Rows 1\-6 from `disgust_tidy`
| id | user\_id | date | domain | score |
| --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | moral | 1\.428571 |
| 1 | 1 | 2008\-07\-10 | pathogen | 2\.714286 |
| 1 | 1 | 2008\-07\-10 | sexual | 1\.714286 |
| 3 | 155324 | 2008\-07\-11 | moral | 3\.000000 |
| 3 | 155324 | 2008\-07\-11 | pathogen | 2\.571429 |
| 3 | 155324 | 2008\-07\-11 | sexual | 1\.857143 |
```
disgust_scored <- disgust_tidy %>%
spread(domain, score) %>%
mutate(
total = moral + sexual + pathogen,
year = year(date)
) %>%
filter(!is.na(total)) %>%
arrange(user_id)
```
Table 5\.10: Rows 1\-6 from `disgust_scored`
| id | user\_id | date | moral | pathogen | sexual | total | year |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | 5\.285714 | 4\.714286 | 2\.142857 | 12\.142857 | 2008 |
| 1 | 1 | 2008\-07\-10 | 1\.428571 | 2\.714286 | 1\.714286 | 5\.857143 | 2008 |
| 13332 | 2118 | 2012\-01\-02 | 1\.000000 | 5\.000000 | 3\.000000 | 9\.000000 | 2012 |
| 23 | 2311 | 2008\-07\-15 | 4\.000000 | 4\.285714 | 1\.857143 | 10\.142857 | 2008 |
| 7980 | 4458 | 2011\-09\-05 | 3\.428571 | 3\.571429 | 3\.000000 | 10\.000000 | 2011 |
| 552 | 4651 | 2008\-08\-23 | 3\.857143 | 4\.857143 | 4\.285714 | 13\.000000 | 2008 |
```
disgust_summarised <- disgust_scored %>%
group_by(year) %>%
summarise(
n = n(),
avg_pathogen = mean(pathogen),
avg_moral = mean(moral),
avg_sexual = mean(sexual),
first_user = first(user_id),
last_user = last(user_id),
.groups = "drop"
)
```
Table 5\.11: Rows 1\-6 from `disgust_summarised`
| year | n | avg\_pathogen | avg\_moral | avg\_sexual | first\_user | last\_user |
| --- | --- | --- | --- | --- | --- | --- |
| 2008 | 2392 | 3\.697265 | 3\.806259 | 2\.539298 | 0 | 188708 |
| 2009 | 2410 | 3\.674333 | 3\.760937 | 2\.528275 | 6093 | 251959 |
| 2010 | 1418 | 3\.731412 | 3\.843139 | 2\.510075 | 5469 | 319641 |
| 2011 | 5586 | 3\.756918 | 3\.806506 | 2\.628612 | 4458 | 406569 |
| 2012 | 5375 | 3\.740465 | 3\.774591 | 2\.545701 | 2118 | 458194 |
| 2013 | 1222 | 3\.771920 | 3\.906944 | 2\.549100 | 7646 | 462428 |
| 2014 | 54 | 3\.759259 | 4\.000000 | 2\.306878 | 11090 | 461307 |
| 2015 | 19 | 3\.781955 | 4\.451128 | 2\.375940 | 102699 | 460283 |
| 2016 | 8 | 3\.696429 | 3\.625000 | 2\.375000 | 4976 | 453237 |
| 2017 | 6 | 3\.071429 | 3\.690476 | 1\.404762 | 48303 | 370464 |
5\.5 Additional dplyr one\-table verbs
--------------------------------------
Use the code examples below and the help pages to figure out what the following one\-table verbs do. Most have pretty self\-explanatory names.
### 5\.5\.1 rename()
You can rename columns with `rename()`. Set the argument name to the new name, and the value to the old name. You need to put a name in quotes or backticks if it doesn’t follow the rules for a good variable name (contains only letters, numbers, underscores, and full stops; and doesn’t start with a number).
```
sw <- starwars %>%
rename(Name = name,
Height = height,
Mass = mass,
`Hair Colour` = hair_color,
`Skin Colour` = skin_color,
`Eye Colour` = eye_color,
`Birth Year` = birth_year)
names(sw)
```
```
## [1] "Name" "Height" "Mass" "Hair Colour" "Skin Colour"
## [6] "Eye Colour" "Birth Year" "sex" "gender" "homeworld"
## [11] "species" "films" "vehicles" "starships"
```
Almost everyone gets confused at some point with `rename()` and tries to put the original names on the left and the new names on the right. Try it and see what the error message looks like.
### 5\.5\.2 distinct()
Get rid of exactly duplicate rows with `distinct()`. This can be helpful if, for example, you are merging data from multiple computers and some of the data got copied from one computer to another, creating duplicate rows.
```
# create a data table with duplicated values
dupes <- tibble(
id = c( 1, 2, 1, 2, 1, 2),
dv = c("A", "B", "C", "D", "A", "B")
)
distinct(dupes)
```
| id | dv |
| --- | --- |
| 1 | A |
| 2 | B |
| 1 | C |
| 2 | D |
### 5\.5\.3 count()
The function `count()` is a quick shortcut for the common combination of `group_by()` and `summarise()` used to count the number of rows per group.
```
starwars %>%
group_by(sex) %>%
summarise(n = n(), .groups = "drop")
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
```
count(starwars, sex)
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
### 5\.5\.4 slice()
```
slice(starwars, 1:3, 10)
```
| name | height | mass | hair\_color | skin\_color | eye\_color | birth\_year | sex | gender | homeworld | species | films | vehicles | starships |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Luke Skywalker | 172 | 77 | blond | fair | blue | 19 | male | masculine | Tatooine | Human | The Empire Strikes Back, Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | Snowspeeder , Imperial Speeder Bike | X\-wing , Imperial shuttle |
| C\-3PO | 167 | 75 | NA | gold | yellow | 112 | none | masculine | Tatooine | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | | |
| R2\-D2 | 96 | 32 | NA | white, blue | red | 33 | none | masculine | Naboo | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | | |
| Obi\-Wan Kenobi | 182 | 77 | auburn, white | fair | blue\-gray | 57 | male | masculine | Stewjon | Human | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | Tribubble bongo | Jedi starfighter , Trade Federation cruiser, Naboo star skiff , Jedi Interceptor , Belbullab\-22 starfighter |
### 5\.5\.5 pull()
```
starwars %>%
filter(species == "Droid") %>%
pull(name)
```
```
## [1] "C-3PO" "R2-D2" "R5-D4" "IG-88" "R4-P17" "BB8"
```
5\.6 Window functions
---------------------
Window functions use the order of rows to calculate values. You can use them to do things that require ranking or ordering, like choose the top scores in each class, or accessing the previous and next rows, like calculating cumulative sums or means.
The [dplyr window functions vignette](https://dplyr.tidyverse.org/articles/window-functions.html) has very good detailed explanations of these functions, but we’ve described a few of the most useful ones below.
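For instance, a common pattern is combining a ranking function with `filter()` to pick the top score in each group; here is a minimal sketch with made-up data:
```
# made-up scores in two classes
top_scores <- tibble(
  class = c("A", "A", "B", "B"),
  score = c(90, 85, 70, 95)
) %>%
  group_by(class) %>%
  filter(min_rank(desc(score)) == 1) %>%  # rank 1 = highest score per class
  ungroup()

top_scores # one row per class: the highest score
```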
### 5\.6\.1 Ranking functions
```
grades <- tibble(
id = 1:5,
"Data Skills" = c(16, 17, 17, 19, 20),
"Statistics" = c(14, 16, 18, 18, 19)
) %>%
gather(class, grade, 2:3) %>%
group_by(class) %>%
mutate(row_number = row_number(),
rank = rank(grade),
min_rank = min_rank(grade),
dense_rank = dense_rank(grade),
quartile = ntile(grade, 4),
percentile = ntile(grade, 100))
```
Table 5\.12: All rows from `grades`
| id | class | grade | row\_number | rank | min\_rank | dense\_rank | quartile | percentile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Data Skills | 16 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Data Skills | 17 | 2 | 2\.5 | 2 | 2 | 1 | 2 |
| 3 | Data Skills | 17 | 3 | 2\.5 | 2 | 2 | 2 | 3 |
| 4 | Data Skills | 19 | 4 | 4\.0 | 4 | 3 | 3 | 4 |
| 5 | Data Skills | 20 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
| 1 | Statistics | 14 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Statistics | 16 | 2 | 2\.0 | 2 | 2 | 1 | 2 |
| 3 | Statistics | 18 | 3 | 3\.5 | 3 | 3 | 2 | 3 |
| 4 | Statistics | 18 | 4 | 3\.5 | 3 | 3 | 3 | 4 |
| 5 | Statistics | 19 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
* What are the differences among `row_number()`, `rank()`, `min_rank()`, `dense_rank()`, and `ntile()`?
* Why doesn’t `row_number()` need an argument?
* What would happen if you gave it the argument `grade` or `class`?
* What do you think would happen if you removed the `group_by(class)` line above?
* What if you added `id` to the grouping?
* What happens if you change the order of the rows?
* What does the second argument in `ntile()` do?
You can use window functions to group your data into quantiles.
```
sw_mass <- starwars %>%
group_by(tertile = ntile(mass, 3)) %>%
summarise(min = min(mass),
max = max(mass),
mean = mean(mass),
.groups = "drop")
```
Table 5\.13: All rows from `sw_mass`
| tertile | min | max | mean |
| --- | --- | --- | --- |
| 1 | 15 | 68 | 45\.6600 |
| 2 | 74 | 82 | 78\.4100 |
| 3 | 83 | 1358 | 171\.5789 |
| NA | NA | NA | NA |
Why is there a row of `NA` values? How would you get rid of them?
### 5\.6\.2 Offset functions
The function `lag()` gives a previous row’s value. It defaults to 1 row back, but you can change that with the `n` argument. The function `lead()` gives values ahead of the current row.
```
lag_lead <- tibble(x = 1:6) %>%
mutate(lag = lag(x),
lag2 = lag(x, n = 2),
lead = lead(x, default = 0))
```
Table 5\.14: All rows from `lag_lead`
| x | lag | lag2 | lead |
| --- | --- | --- | --- |
| 1 | NA | NA | 2 |
| 2 | 1 | NA | 3 |
| 3 | 2 | 1 | 4 |
| 4 | 3 | 2 | 5 |
| 5 | 4 | 3 | 6 |
| 6 | 5 | 4 | 0 |
You can use offset functions to calculate change between trials or where a value changes. Use the `order_by` argument to specify the order of the rows. Alternatively, you can use `arrange()` before the offset functions.
```
trials <- tibble(
trial = sample(1:10, 10),
cond = sample(c("exp", "ctrl"), 10, T),
score = rpois(10, 4)
) %>%
mutate(
score_change = score - lag(score, order_by = trial),
change_cond = cond != lag(cond, order_by = trial,
default = "no condition")
) %>%
arrange(trial)
```
Table 5\.15: All rows from `trials`
| trial | cond | score | score\_change | change\_cond |
| --- | --- | --- | --- | --- |
| 1 | ctrl | 8 | NA | TRUE |
| 2 | ctrl | 4 | \-4 | FALSE |
| 3 | exp | 6 | 2 | TRUE |
| 4 | ctrl | 2 | \-4 | TRUE |
| 5 | ctrl | 3 | 1 | FALSE |
| 6 | ctrl | 6 | 3 | FALSE |
| 7 | ctrl | 2 | \-4 | FALSE |
| 8 | exp | 4 | 2 | TRUE |
| 9 | ctrl | 4 | 0 | TRUE |
| 10 | exp | 3 | \-1 | TRUE |
Look at the help pages for `lag()` and `lead()`.
* What happens if you remove the `order_by` argument or change it to `cond`?
* What does the `default` argument do?
* Can you think of circumstances in your own data where you might need to use `lag()` or `lead()`?
### 5\.6\.3 Cumulative aggregates
`cumsum()`, `cummin()`, and `cummax()` are base R functions for calculating cumulative sums, minimums, and maximums. The dplyr package introduces `cumany()` and `cumall()`, which return `TRUE` if any or all of the previous values meet their criteria.
```
cumulative <- tibble(
time = 1:10,
obs = c(2, 2, 1, 2, 4, 3, 1, 0, 3, 5)
) %>%
mutate(
cumsum = cumsum(obs),
cummin = cummin(obs),
cummax = cummax(obs),
cumany = cumany(obs == 3),
cumall = cumall(obs < 4)
)
```
Table 5\.16: All rows from `cumulative`
| time | obs | cumsum | cummin | cummax | cumany | cumall |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 2 | 2 | 2 | FALSE | TRUE |
| 2 | 2 | 4 | 2 | 2 | FALSE | TRUE |
| 3 | 1 | 5 | 1 | 2 | FALSE | TRUE |
| 4 | 2 | 7 | 1 | 2 | FALSE | TRUE |
| 5 | 4 | 11 | 1 | 4 | FALSE | FALSE |
| 6 | 3 | 14 | 1 | 4 | TRUE | FALSE |
| 7 | 1 | 15 | 1 | 4 | TRUE | FALSE |
| 8 | 0 | 15 | 0 | 4 | TRUE | FALSE |
| 9 | 3 | 18 | 0 | 4 | TRUE | FALSE |
| 10 | 5 | 23 | 0 | 5 | TRUE | FALSE |
* What would happen if you change `cumany(obs == 3)` to `cumany(obs > 2)`?
* What would happen if you change `cumall(obs < 4)` to `cumall(obs < 2)`?
* Can you think of circumstances in your own data where you might need to use `cumany()` or `cumall()`?
5\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [data wrangling](https://psyteachr.github.io/glossary/d#data.wrangling) | The process of preparing data for visualisation and statistical analysis. |
5\.8 Exercises
--------------
Download the [exercises](exercises/05_dplyr_exercise.Rmd). See the [answers](exercises/05_dplyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(5)
# run this to access the answers
dataskills::exercise(5, answers = TRUE)
```
5\.1 Learning Objectives
------------------------
### 5\.1\.1 Basic
1. Be able to use the 6 main dplyr one\-table verbs: [(video)](https://youtu.be/l12tNKClTR0)
* [`select()`](dplyr.html#select)
* [`filter()`](dplyr.html#filter)
* [`arrange()`](dplyr.html#arrange)
* [`mutate()`](dplyr.html#mutate)
* [`summarise()`](dplyr.html#summarise)
* [`group_by()`](dplyr.html#group_by)
2. Be able to [wrangle data by chaining tidyr and dplyr functions](dplyr.html#all-together) [(video)](https://youtu.be/hzFFAkwrkqA)
3. Be able to use these additional one\-table verbs: [(video)](https://youtu.be/GmfF162mq4g)
* [`rename()`](dplyr.html#rename)
* [`distinct()`](dplyr.html#distinct)
* [`count()`](dplyr.html#count)
* [`slice()`](dplyr.html#slice)
* [`pull()`](dplyr.html#pull)
### 5\.1\.2 Intermediate
4. Fine control of [`select()` operations](dplyr.html#select_helpers) [(video)](https://youtu.be/R1bi1QwF9t0)
5. Use [window functions](dplyr.html#window) [(video)](https://youtu.be/uo4b0W9mqPc)
### 5\.1\.1 Basic
1. Be able to use the 6 main dplyr one\-table verbs: [(video)](https://youtu.be/l12tNKClTR0)
* [`select()`](dplyr.html#select)
* [`filter()`](dplyr.html#filter)
* [`arrange()`](dplyr.html#arrange)
* [`mutate()`](dplyr.html#mutate)
* [`summarise()`](dplyr.html#summarise)
* [`group_by()`](dplyr.html#group_by)
2. Be able to [wrangle data by chaining tidyr and dplyr functions](dplyr.html#all-together) [(video)](https://youtu.be/hzFFAkwrkqA)
3. Be able to use these additional one\-table verbs: [(video)](https://youtu.be/GmfF162mq4g)
* [`rename()`](dplyr.html#rename)
* [`distinct()`](dplyr.html#distinct)
* [`count()`](dplyr.html#count)
* [`slice()`](dplyr.html#slice)
* [`pull()`](dplyr.html#pull)
### 5\.1\.2 Intermediate
4. Fine control of [`select()` operations](dplyr.html#select_helpers) [(video)](https://youtu.be/R1bi1QwF9t0)
5. Use [window functions](dplyr.html#window) [(video)](https://youtu.be/uo4b0W9mqPc)
5\.2 Resources
--------------
* [Chapter 5: Data Transformation](http://r4ds.had.co.nz/transform.html) in *R for Data Science*
* [Data transformation cheat sheet](https://github.com/rstudio/cheatsheets/raw/master/data-transformation.pdf)
* [Chapter 16: Date and times](http://r4ds.had.co.nz/dates-and-times.html) in *R for Data Science*
5\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(lubridate)
library(dataskills)
set.seed(8675309) # makes sure random numbers are reproducible
```
### 5\.3\.1 The `disgust` dataset
These examples will use data from `dataskills::disgust`, which contains data from the [Three Domain Disgust Scale](http://digitalrepository.unm.edu/cgi/viewcontent.cgi?article=1139&context=psy_etds). Each participant is identified by a unique `user_id` and each questionnaire completion has a unique `id`. Look at the Help for this dataset to see the individual questions.
```
data("disgust", package = "dataskills")
#disgust <- read_csv("https://psyteachr.github.io/msc-data-skills/data/disgust.csv")
```
### 5\.3\.1 The `disgust` dataset
These examples will use data from `dataskills::disgust`, which contains data from the [Three Domain Disgust Scale](http://digitalrepository.unm.edu/cgi/viewcontent.cgi?article=1139&context=psy_etds). Each participant is identified by a unique `user_id` and each questionnaire completion has a unique `id`. Look at the Help for this dataset to see the individual questions.
```
data("disgust", package = "dataskills")
#disgust <- read_csv("https://psyteachr.github.io/msc-data-skills/data/disgust.csv")
```
5\.4 Six main dplyr verbs
-------------------------
Most of the [data wrangling](https://psyteachr.github.io/glossary/d#data-wrangling "The process of preparing data for visualisation and statistical analysis.") you’ll want to do with psychological data will involve the `tidyr` functions you learned in [Chapter 4](tidyr.html#tidyr) and the six main `dplyr` verbs: `select`, `filter`, `arrange`, `mutate`, `summarise`, and `group_by`.
### 5\.4\.1 select()
Select columns by name or number.
You can select each column individually, separated by commas (e.g., `col1, col2`). You can also select all columns between two columns by separating them with a colon (e.g., `start_col:end_col`).
```
moral <- disgust %>% select(user_id, moral1:moral7)
names(moral)
```
```
## [1] "user_id" "moral1" "moral2" "moral3" "moral4" "moral5" "moral6"
## [8] "moral7"
```
You can select columns by number, which is useful when the column names are long or complicated.
```
sexual <- disgust %>% select(2, 11:17)
names(sexual)
```
```
## [1] "user_id" "sexual1" "sexual2" "sexual3" "sexual4" "sexual5" "sexual6"
## [8] "sexual7"
```
You can use a minus symbol to unselect columns, leaving all of the other columns. If you want to exclude a span of columns, put parentheses around the span first (e.g., `-(moral1:moral7)`, not `-moral1:moral7`).
```
pathogen <- disgust %>% select(-id, -date, -(moral1:sexual7))
names(pathogen)
```
```
## [1] "user_id" "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5"
## [7] "pathogen6" "pathogen7"
```
#### 5\.4\.1\.1 Select helpers
You can select columns based on criteria about the column names.
##### 5\.4\.1\.1\.1 `starts_with()`
Select columns that start with a character string.
```
u <- disgust %>% select(starts_with("u"))
names(u)
```
```
## [1] "user_id"
```
##### 5\.4\.1\.1\.2 `ends_with()`
Select columns that end with a character string.
```
firstq <- disgust %>% select(ends_with("1"))
names(firstq)
```
```
## [1] "moral1" "sexual1" "pathogen1"
```
##### 5\.4\.1\.1\.3 `contains()`
Select columns that contain a character string.
```
pathogen <- disgust %>% select(contains("pathogen"))
names(pathogen)
```
```
## [1] "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5" "pathogen6"
## [7] "pathogen7"
```
##### 5\.4\.1\.1\.4 `num_range()`
Select columns with a name that matches the pattern `prefix`.
```
moral2_4 <- disgust %>% select(num_range("moral", 2:4))
names(moral2_4)
```
```
## [1] "moral2" "moral3" "moral4"
```
Use `width` to set the number of digits with leading zeros. For example, `num_range(‘var_,’ 8:10, width=2)` selects columns `var_08`, `var_09`, and `var_10`.
### 5\.4\.2 filter()
Select rows by matching column criteria.
Select all rows where the user\_id is 1 (that’s Lisa).
```
disgust %>% filter(user_id == 1)
```
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
Remember to use `==` and not `=` to check if two things are equivalent. A single `=` assigns the righthand value to the lefthand variable and (usually) evaluates to `TRUE`.
You can select on multiple criteria by separating them with commas.
```
amoral <- disgust %>% filter(
moral1 == 0,
moral2 == 0,
moral3 == 0,
moral4 == 0,
moral5 == 0,
moral6 == 0,
moral7 == 0
)
```
You can use the symbols `&`, `|`, and `!` to mean “and,” “or,” and “not.” You can also use other operators to make equations.
```
# everyone who chose either 0 or 7 for question moral1
moral_extremes <- disgust %>%
filter(moral1 == 0 | moral1 == 7)
# everyone who chose the same answer for all moral questions
moral_consistent <- disgust %>%
filter(
moral2 == moral1 &
moral3 == moral1 &
moral4 == moral1 &
moral5 == moral1 &
moral6 == moral1 &
moral7 == moral1
)
# everyone who did not answer 7 for all 7 moral questions
moral_no_ceiling <- disgust %>%
filter(moral1+moral2+moral3+moral4+moral5+moral6+moral7 != 7*7)
```
#### 5\.4\.2\.1 Match operator (%in%)
Sometimes you need to exclude some participant IDs for reasons that can’t be described in code. The match operator (`%in%`) is useful here for testing if a column value is in a list. Surround the equation with parentheses and put `!` in front to test that a value is not in the list.
```
no_researchers <- disgust %>%
filter(!(user_id %in% c(1,2)))
```
#### 5\.4\.2\.2 Dates
You can use the `lubridate` package to work with dates. For example, you can use the `year()` function to return just the year from the `date` column and then select only data collected in 2010\.
```
disgust2010 <- disgust %>%
filter(year(date) == 2010)
```
Table 5\.1: Rows 1\-6 from `disgust2010`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 6902 | 5469 | 2010\-12\-06 | 0 | 1 | 3 | 4 | 1 | 0 | 1 | 3 | 5 | 2 | 4 | 6 | 6 | 5 | 5 | 2 | 4 | 4 | 2 | 2 | 6 |
| 6158 | 6066 | 2010\-04\-18 | 4 | 5 | 6 | 5 | 5 | 4 | 4 | 3 | 0 | 1 | 6 | 3 | 5 | 3 | 6 | 5 | 5 | 5 | 5 | 5 | 5 |
| 6362 | 7129 | 2010\-06\-09 | 4 | 4 | 4 | 4 | 3 | 3 | 2 | 4 | 2 | 1 | 3 | 2 | 3 | 6 | 5 | 2 | 0 | 4 | 5 | 5 | 4 |
| 6302 | 39318 | 2010\-05\-20 | 2 | 4 | 1 | 4 | 5 | 6 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 3 | 2 | 3 | 2 | 3 | 2 | 4 |
| 5429 | 43029 | 2010\-01\-02 | 1 | 1 | 1 | 3 | 6 | 4 | 2 | 2 | 0 | 1 | 4 | 6 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 6 | 4 |
| 6732 | 71955 | 2010\-10\-15 | 2 | 5 | 3 | 6 | 3 | 2 | 5 | 4 | 3 | 3 | 6 | 6 | 6 | 5 | 4 | 2 | 6 | 5 | 6 | 6 | 3 |
Or select data from at least 5 years ago. You can use the `range` function to check the minimum and maximum dates in the resulting dataset.
```
disgust_5ago <- disgust %>%
filter(date < today() - dyears(5))
range(disgust_5ago$date)
```
```
## [1] "2008-07-10" "2016-08-04"
```
### 5\.4\.3 arrange()
Sort your dataset using `arrange()`. You will find yourself needing to sort data in R much less than you do in Excel, since you don’t need to have rows next to each other in order to, for example, calculate group means. But `arrange()` can be useful when preparing data from display in tables.
```
disgust_order <- disgust %>%
arrange(date, moral1)
```
Table 5\.2: Rows 1\-6 from `disgust_order`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
| 3 | 155324 | 2008\-07\-11 | 2 | 4 | 3 | 5 | 2 | 1 | 4 | 1 | 0 | 1 | 2 | 2 | 6 | 1 | 4 | 3 | 1 | 0 | 4 | 4 | 2 |
| 6 | 155386 | 2008\-07\-12 | 2 | 4 | 0 | 4 | 0 | 0 | 0 | 6 | 0 | 0 | 6 | 4 | 4 | 6 | 4 | 5 | 5 | 1 | 6 | 4 | 2 |
| 7 | 155409 | 2008\-07\-12 | 4 | 5 | 5 | 4 | 5 | 1 | 5 | 3 | 0 | 1 | 5 | 2 | 0 | 0 | 5 | 5 | 3 | 4 | 4 | 2 | 6 |
| 4 | 155366 | 2008\-07\-12 | 6 | 6 | 6 | 3 | 6 | 6 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 4 | 4 | 5 | 5 | 4 | 6 | 0 |
| 5 | 155370 | 2008\-07\-12 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 2 | 6 | 4 | 3 | 6 | 6 | 6 | 6 | 6 | 6 | 2 | 4 | 4 | 6 |
Reverse the order using `desc()`
```
disgust_order_desc <- disgust %>%
arrange(desc(date))
```
Table 5\.3: Rows 1\-6 from `disgust_order_desc`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 39456 | 356866 | 2017\-08\-21 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 39447 | 128727 | 2017\-08\-13 | 2 | 4 | 1 | 2 | 2 | 5 | 3 | 0 | 0 | 1 | 0 | 0 | 2 | 1 | 2 | 0 | 2 | 1 | 1 | 1 | 1 |
| 39371 | 152955 | 2017\-06\-13 | 6 | 6 | 3 | 6 | 6 | 6 | 6 | 1 | 0 | 0 | 2 | 1 | 4 | 4 | 5 | 0 | 5 | 4 | 3 | 6 | 3 |
| 39342 | 48303 | 2017\-05\-22 | 4 | 5 | 4 | 4 | 6 | 4 | 5 | 2 | 1 | 4 | 1 | 1 | 3 | 1 | 5 | 5 | 4 | 4 | 4 | 4 | 5 |
| 39159 | 151633 | 2017\-04\-04 | 4 | 5 | 6 | 5 | 3 | 6 | 2 | 6 | 4 | 0 | 4 | 0 | 3 | 6 | 4 | 4 | 6 | 6 | 6 | 6 | 4 |
| 38942 | 370464 | 2017\-02\-01 | 1 | 5 | 0 | 6 | 5 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 3 | 3 | 1 | 6 | 3 |
### 5\.4\.4 mutate()
Add new columns. This is one of the most useful functions in the tidyverse.
Refer to other columns by their names (unquoted). You can add more than one column in the same mutate function, just separate the columns with a comma. Once you make a new column, you can use it in further column definitions e.g., `total` below).
```
disgust_total <- disgust %>%
mutate(
pathogen = pathogen1 + pathogen2 + pathogen3 + pathogen4 + pathogen5 + pathogen6 + pathogen7,
moral = moral1 + moral2 + moral3 + moral4 + moral5 + moral6 + moral7,
sexual = sexual1 + sexual2 + sexual3 + sexual4 + sexual5 + sexual6 + sexual7,
total = pathogen + moral + sexual,
user_id = paste0("U", user_id)
)
```
Table 5\.4: Rows 1\-6 from `disgust_total`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 | pathogen | moral | sexual | total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | U0 | 2008\-10\-07 | 5 | 6 | 4 | 6 | 5 | 5 | 6 | 4 | 0 | 1 | 0 | 1 | 4 | 5 | 6 | 1 | 6 | 5 | 4 | 5 | 6 | 33 | 37 | 15 | 85 |
| 1 | U1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 | 19 | 10 | 12 | 41 |
| 1599 | U2 | 2008\-10\-27 | 1 | 1 | 1 | 1 | NA | NA | 1 | 1 | NA | 1 | NA | 1 | NA | NA | NA | NA | 1 | NA | NA | NA | NA | NA | NA | NA | NA |
| 13332 | U2118 | 2012\-01\-02 | 0 | 1 | 1 | 1 | 1 | 2 | 1 | 4 | 3 | 0 | 6 | 0 | 3 | 5 | 5 | 6 | 4 | 6 | 5 | 5 | 4 | 35 | 7 | 21 | 63 |
| 23 | U2311 | 2008\-07\-15 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 2 | 1 | 2 | 1 | 1 | 1 | 5 | 5 | 5 | 4 | 4 | 5 | 4 | 3 | 30 | 28 | 13 | 71 |
| 1160 | U3630 | 2008\-10\-06 | 1 | 5 | NA | 5 | 5 | 5 | 1 | 0 | 5 | 0 | 2 | 0 | 1 | 0 | 6 | 3 | 1 | 1 | 3 | 1 | 0 | 15 | NA | 8 | NA |
You can overwrite a column by giving a new column the same name as the old column (see `user_id`) above. Make sure that you mean to do this and that you aren’t trying to use the old column value after you redefine it.
### 5\.4\.5 summarise()
Create summary statistics for the dataset. Check the [Data Wrangling Cheat Sheet](https://www.rstudio.org/links/data_wrangling_cheat_sheet) or the [Data Transformation Cheat Sheet](https://github.com/rstudio/cheatsheets/raw/master/source/pdfs/data-transformation-cheatsheet.pdf) for various summary functions. Some common ones are: `mean()`, `sd()`, `n()`, `sum()`, and `quantile()`.
```
disgust_summary<- disgust_total %>%
summarise(
n = n(),
q25 = quantile(total, .25, na.rm = TRUE),
q50 = quantile(total, .50, na.rm = TRUE),
q75 = quantile(total, .75, na.rm = TRUE),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE)
)
```
Table 5\.5: All rows from `disgust_summary`
| n | q25 | q50 | q75 | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 20000 | 59 | 71 | 83 | 70\.6868 | 18\.24253 | 0 | 126 |
### 5\.4\.6 group\_by()
Create subsets of the data. You can use this to create summaries,
like the mean value for all of your experimental groups.
Here, we’ll use `mutate` to create a new column called `year`, group by `year`, and calculate the average scores.
```
disgust_groups <- disgust_total %>%
mutate(year = year(date)) %>%
group_by(year) %>%
summarise(
n = n(),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE),
.groups = "drop"
)
```
Table 5\.6: All rows from `disgust_groups`
| year | n | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- |
| 2008 | 2578 | 70\.29975 | 18\.46251 | 0 | 126 |
| 2009 | 2580 | 69\.74481 | 18\.61959 | 3 | 126 |
| 2010 | 1514 | 70\.59238 | 18\.86846 | 6 | 126 |
| 2011 | 6046 | 71\.34425 | 17\.79446 | 0 | 126 |
| 2012 | 5938 | 70\.42530 | 18\.35782 | 0 | 126 |
| 2013 | 1251 | 71\.59574 | 17\.61375 | 0 | 126 |
| 2014 | 58 | 70\.46296 | 17\.23502 | 19 | 113 |
| 2015 | 21 | 74\.26316 | 16\.89787 | 43 | 107 |
| 2016 | 8 | 67\.87500 | 32\.62531 | 0 | 110 |
| 2017 | 6 | 57\.16667 | 27\.93862 | 21 | 90 |
If you don’t add `.groups = “drop”` at the end of the `summarise()` function, you will get the following message: “`summarise()` ungrouping output (override with `.groups` argument).” This just reminds you that the groups are still in effect and any further functions will also be grouped.
Older versions of dplyr didn’t do this, so older code will generate this warning if you run it with newer version of dplyr. Older code might `ungroup()` after `summarise()` to indicate that groupings should be dropped. The default behaviour is usually correct, so you don’t need to worry, but it’s best to explicitly set `.groups` in a `summarise()` function after `group_by()` if you want to “keep” or “drop” the groupings.
You can use `filter` after `group_by`. The following example returns the lowest total score from each year (i.e., the row where the `rank()` of the value in the column `total` is equivalent to `1`).
```
disgust_lowest <- disgust_total %>%
mutate(year = year(date)) %>%
select(user_id, year, total) %>%
group_by(year) %>%
filter(rank(total) == 1) %>%
arrange(year)
```
Table 5\.7: All rows from `disgust_lowest`
| user\_id | year | total |
| --- | --- | --- |
| U236585 | 2009 | 3 |
| U292359 | 2010 | 6 |
| U245384 | 2013 | 0 |
| U206293 | 2014 | 19 |
| U407089 | 2015 | 43 |
| U453237 | 2016 | 0 |
| U356866 | 2017 | 21 |
You can also use `mutate` after `group_by`. The following example calculates subject\-mean\-centered scores by grouping the scores by `user_id` and then subtracting the group\-specific mean from each score. Note the use of `gather` to tidy the data into a long format first.
```
disgust_smc <- disgust %>%
gather("question", "score", moral1:pathogen7) %>%
group_by(user_id) %>%
mutate(score_smc = score - mean(score, na.rm = TRUE)) %>%
ungroup()
```
Use `ungroup()` as soon as you are done with grouped functions, otherwise the data table will still be grouped when you use it in the future.
Table 5\.8: Rows 1\-6 from `disgust_smc`
| id | user\_id | date | question | score | score\_smc |
| --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | moral1 | 5 | 0\.9523810 |
| 1 | 1 | 2008\-07\-10 | moral1 | 2 | 0\.0476190 |
| 1599 | 2 | 2008\-10\-27 | moral1 | 1 | 0\.0000000 |
| 13332 | 2118 | 2012\-01\-02 | moral1 | 0 | \-3\.0000000 |
| 23 | 2311 | 2008\-07\-15 | moral1 | 4 | 0\.6190476 |
| 1160 | 3630 | 2008\-10\-06 | moral1 | 1 | \-1\.2500000 |
### 5\.4\.7 All Together
A lot of what we did above would be easier if the data were tidy, so let’s do that first. Then we can use `group_by` to calculate the domain scores.
After that, we can spread out the 3 domains, calculate the total score, remove any rows with a missing (`NA`) total, and calculate mean values by year.
```
disgust_tidy <- dataskills::disgust %>%
gather("question", "score", moral1:pathogen7) %>%
separate(question, c("domain","q_num"), sep = -1) %>%
group_by(id, user_id, date, domain) %>%
summarise(score = mean(score), .groups = "drop")
```
Table 5\.9: Rows 1\-6 from `disgust_tidy`
| id | user\_id | date | domain | score |
| --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | moral | 1\.428571 |
| 1 | 1 | 2008\-07\-10 | pathogen | 2\.714286 |
| 1 | 1 | 2008\-07\-10 | sexual | 1\.714286 |
| 3 | 155324 | 2008\-07\-11 | moral | 3\.000000 |
| 3 | 155324 | 2008\-07\-11 | pathogen | 2\.571429 |
| 3 | 155324 | 2008\-07\-11 | sexual | 1\.857143 |
```
disgust_scored <- disgust_tidy %>%
spread(domain, score) %>%
mutate(
total = moral + sexual + pathogen,
year = year(date)
) %>%
filter(!is.na(total)) %>%
arrange(user_id)
```
Table 5\.10: Rows 1\-6 from `disgust_scored`
| id | user\_id | date | moral | pathogen | sexual | total | year |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | 5\.285714 | 4\.714286 | 2\.142857 | 12\.142857 | 2008 |
| 1 | 1 | 2008\-07\-10 | 1\.428571 | 2\.714286 | 1\.714286 | 5\.857143 | 2008 |
| 13332 | 2118 | 2012\-01\-02 | 1\.000000 | 5\.000000 | 3\.000000 | 9\.000000 | 2012 |
| 23 | 2311 | 2008\-07\-15 | 4\.000000 | 4\.285714 | 1\.857143 | 10\.142857 | 2008 |
| 7980 | 4458 | 2011\-09\-05 | 3\.428571 | 3\.571429 | 3\.000000 | 10\.000000 | 2011 |
| 552 | 4651 | 2008\-08\-23 | 3\.857143 | 4\.857143 | 4\.285714 | 13\.000000 | 2008 |
```
disgust_summarised <- disgust_scored %>%
group_by(year) %>%
summarise(
n = n(),
avg_pathogen = mean(pathogen),
avg_moral = mean(moral),
avg_sexual = mean(sexual),
first_user = first(user_id),
last_user = last(user_id),
.groups = "drop"
)
```
Table 5\.11: Rows 1\-6 from `disgust_summarised`
| year | n | avg\_pathogen | avg\_moral | avg\_sexual | first\_user | last\_user |
| --- | --- | --- | --- | --- | --- | --- |
| 2008 | 2392 | 3\.697265 | 3\.806259 | 2\.539298 | 0 | 188708 |
| 2009 | 2410 | 3\.674333 | 3\.760937 | 2\.528275 | 6093 | 251959 |
| 2010 | 1418 | 3\.731412 | 3\.843139 | 2\.510075 | 5469 | 319641 |
| 2011 | 5586 | 3\.756918 | 3\.806506 | 2\.628612 | 4458 | 406569 |
| 2012 | 5375 | 3\.740465 | 3\.774591 | 2\.545701 | 2118 | 458194 |
| 2013 | 1222 | 3\.771920 | 3\.906944 | 2\.549100 | 7646 | 462428 |
| 2014 | 54 | 3\.759259 | 4\.000000 | 2\.306878 | 11090 | 461307 |
| 2015 | 19 | 3\.781955 | 4\.451128 | 2\.375940 | 102699 | 460283 |
| 2016 | 8 | 3\.696429 | 3\.625000 | 2\.375000 | 4976 | 453237 |
| 2017 | 6 | 3\.071429 | 3\.690476 | 1\.404762 | 48303 | 370464 |
### 5\.4\.1 select()
Select columns by name or number.
You can select each column individually, separated by commas (e.g., `col1, col2`). You can also select all columns between two columns by separating them with a colon (e.g., `start_col:end_col`).
```
moral <- disgust %>% select(user_id, moral1:moral7)
names(moral)
```
```
## [1] "user_id" "moral1" "moral2" "moral3" "moral4" "moral5" "moral6"
## [8] "moral7"
```
You can select columns by number, which is useful when the column names are long or complicated.
```
sexual <- disgust %>% select(2, 11:17)
names(sexual)
```
```
## [1] "user_id" "sexual1" "sexual2" "sexual3" "sexual4" "sexual5" "sexual6"
## [8] "sexual7"
```
You can use a minus symbol to unselect columns, leaving all of the other columns. If you want to exclude a span of columns, put parentheses around the span first (e.g., `-(moral1:moral7)`, not `-moral1:moral7`).
```
pathogen <- disgust %>% select(-id, -date, -(moral1:sexual7))
names(pathogen)
```
```
## [1] "user_id" "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5"
## [7] "pathogen6" "pathogen7"
```
#### 5\.4\.1\.1 Select helpers
You can select columns based on criteria about the column names.
##### 5\.4\.1\.1\.1 `starts_with()`
Select columns that start with a character string.
```
u <- disgust %>% select(starts_with("u"))
names(u)
```
```
## [1] "user_id"
```
##### 5\.4\.1\.1\.2 `ends_with()`
Select columns that end with a character string.
```
firstq <- disgust %>% select(ends_with("1"))
names(firstq)
```
```
## [1] "moral1" "sexual1" "pathogen1"
```
##### 5\.4\.1\.1\.3 `contains()`
Select columns that contain a character string.
```
pathogen <- disgust %>% select(contains("pathogen"))
names(pathogen)
```
```
## [1] "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5" "pathogen6"
## [7] "pathogen7"
```
##### 5\.4\.1\.1\.4 `num_range()`
Select columns with a name that matches the pattern `prefix`.
```
moral2_4 <- disgust %>% select(num_range("moral", 2:4))
names(moral2_4)
```
```
## [1] "moral2" "moral3" "moral4"
```
Use `width` to set the number of digits with leading zeros. For example, `num_range(‘var_,’ 8:10, width=2)` selects columns `var_08`, `var_09`, and `var_10`.
### 5\.4\.2 filter()
Select rows by matching column criteria.
Select all rows where the user\_id is 1 (that’s Lisa).
```
disgust %>% filter(user_id == 1)
```
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
Remember to use `==` and not `=` to check whether two things are equal. A single `=` assigns the righthand value to the lefthand name; inside `filter()` it is treated as a named argument rather than a comparison, so it will not do what you expect.
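As a minimal sketch of the difference (the exact error wording depends on your dplyr version):
```
# disgust %>% filter(user_id = 1) # error: `=` is read as a named argument
disgust %>% filter(user_id == 1)  # `==` tests equality row by row
```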
You can select on multiple criteria by separating them with commas.
```
amoral <- disgust %>% filter(
moral1 == 0,
moral2 == 0,
moral3 == 0,
moral4 == 0,
moral5 == 0,
moral6 == 0,
moral7 == 0
)
```
You can use the symbols `&`, `|`, and `!` to mean “and,” “or,” and “not.” You can also use comparison operators (e.g., `>`, `<=`, `!=`) to build more complex conditions.
```
# everyone who chose either 0 or 7 for question moral1
moral_extremes <- disgust %>%
filter(moral1 == 0 | moral1 == 7)
# everyone who chose the same answer for all moral questions
moral_consistent <- disgust %>%
filter(
moral2 == moral1 &
moral3 == moral1 &
moral4 == moral1 &
moral5 == moral1 &
moral6 == moral1 &
moral7 == moral1
)
# everyone who did not answer 7 for all 7 moral questions
moral_no_ceiling <- disgust %>%
filter(moral1+moral2+moral3+moral4+moral5+moral6+moral7 != 7*7)
```
#### 5\.4\.2\.1 Match operator (%in%)
Sometimes you need to exclude some participant IDs for reasons that can’t be described in code. The match operator (`%in%`) is useful here for testing if a column value is in a list. Surround the expression with parentheses and put `!` in front to test that a value is not in the list.
```
no_researchers <- disgust %>%
filter(!(user_id %in% c(1,2)))
```
#### 5\.4\.2\.2 Dates
You can use the `lubridate` package to work with dates. For example, you can use the `year()` function to return just the year from the `date` column and then select only data collected in 2010\.
```
disgust2010 <- disgust %>%
filter(year(date) == 2010)
```
Table 5\.1: Rows 1\-6 from `disgust2010`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 6902 | 5469 | 2010\-12\-06 | 0 | 1 | 3 | 4 | 1 | 0 | 1 | 3 | 5 | 2 | 4 | 6 | 6 | 5 | 5 | 2 | 4 | 4 | 2 | 2 | 6 |
| 6158 | 6066 | 2010\-04\-18 | 4 | 5 | 6 | 5 | 5 | 4 | 4 | 3 | 0 | 1 | 6 | 3 | 5 | 3 | 6 | 5 | 5 | 5 | 5 | 5 | 5 |
| 6362 | 7129 | 2010\-06\-09 | 4 | 4 | 4 | 4 | 3 | 3 | 2 | 4 | 2 | 1 | 3 | 2 | 3 | 6 | 5 | 2 | 0 | 4 | 5 | 5 | 4 |
| 6302 | 39318 | 2010\-05\-20 | 2 | 4 | 1 | 4 | 5 | 6 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 3 | 2 | 3 | 2 | 3 | 2 | 4 |
| 5429 | 43029 | 2010\-01\-02 | 1 | 1 | 1 | 3 | 6 | 4 | 2 | 2 | 0 | 1 | 4 | 6 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 6 | 4 |
| 6732 | 71955 | 2010\-10\-15 | 2 | 5 | 3 | 6 | 3 | 2 | 5 | 4 | 3 | 3 | 6 | 6 | 6 | 5 | 4 | 2 | 6 | 5 | 6 | 6 | 3 |
Or select data from at least 5 years ago. You can use the `range` function to check the minimum and maximum dates in the resulting dataset.
```
disgust_5ago <- disgust %>%
filter(date < today() - dyears(5))
range(disgust_5ago$date)
```
```
## [1] "2008-07-10" "2016-08-04"
```
### 5\.4\.3 arrange()
Sort your dataset using `arrange()`. You will find yourself needing to sort data in R much less than you do in Excel, since you don’t need to have rows next to each other in order to, for example, calculate group means. But `arrange()` can be useful when preparing data for display in tables.
```
disgust_order <- disgust %>%
arrange(date, moral1)
```
Table 5\.2: Rows 1\-6 from `disgust_order`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
| 3 | 155324 | 2008\-07\-11 | 2 | 4 | 3 | 5 | 2 | 1 | 4 | 1 | 0 | 1 | 2 | 2 | 6 | 1 | 4 | 3 | 1 | 0 | 4 | 4 | 2 |
| 6 | 155386 | 2008\-07\-12 | 2 | 4 | 0 | 4 | 0 | 0 | 0 | 6 | 0 | 0 | 6 | 4 | 4 | 6 | 4 | 5 | 5 | 1 | 6 | 4 | 2 |
| 7 | 155409 | 2008\-07\-12 | 4 | 5 | 5 | 4 | 5 | 1 | 5 | 3 | 0 | 1 | 5 | 2 | 0 | 0 | 5 | 5 | 3 | 4 | 4 | 2 | 6 |
| 4 | 155366 | 2008\-07\-12 | 6 | 6 | 6 | 3 | 6 | 6 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 4 | 4 | 5 | 5 | 4 | 6 | 0 |
| 5 | 155370 | 2008\-07\-12 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 2 | 6 | 4 | 3 | 6 | 6 | 6 | 6 | 6 | 6 | 2 | 4 | 4 | 6 |
Reverse the order using `desc()`
```
disgust_order_desc <- disgust %>%
arrange(desc(date))
```
Table 5\.3: Rows 1\-6 from `disgust_order_desc`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 39456 | 356866 | 2017\-08\-21 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 39447 | 128727 | 2017\-08\-13 | 2 | 4 | 1 | 2 | 2 | 5 | 3 | 0 | 0 | 1 | 0 | 0 | 2 | 1 | 2 | 0 | 2 | 1 | 1 | 1 | 1 |
| 39371 | 152955 | 2017\-06\-13 | 6 | 6 | 3 | 6 | 6 | 6 | 6 | 1 | 0 | 0 | 2 | 1 | 4 | 4 | 5 | 0 | 5 | 4 | 3 | 6 | 3 |
| 39342 | 48303 | 2017\-05\-22 | 4 | 5 | 4 | 4 | 6 | 4 | 5 | 2 | 1 | 4 | 1 | 1 | 3 | 1 | 5 | 5 | 4 | 4 | 4 | 4 | 5 |
| 39159 | 151633 | 2017\-04\-04 | 4 | 5 | 6 | 5 | 3 | 6 | 2 | 6 | 4 | 0 | 4 | 0 | 3 | 6 | 4 | 4 | 6 | 6 | 6 | 6 | 4 |
| 38942 | 370464 | 2017\-02\-01 | 1 | 5 | 0 | 6 | 5 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 3 | 3 | 1 | 6 | 3 |
### 5\.4\.4 mutate()
Add new columns. This is one of the most useful functions in the tidyverse.
Refer to other columns by their names (unquoted). You can add more than one column in the same `mutate()` call; just separate the columns with commas. Once you make a new column, you can use it in further column definitions (e.g., `total` below).
```
disgust_total <- disgust %>%
mutate(
pathogen = pathogen1 + pathogen2 + pathogen3 + pathogen4 + pathogen5 + pathogen6 + pathogen7,
moral = moral1 + moral2 + moral3 + moral4 + moral5 + moral6 + moral7,
sexual = sexual1 + sexual2 + sexual3 + sexual4 + sexual5 + sexual6 + sexual7,
total = pathogen + moral + sexual,
user_id = paste0("U", user_id)
)
```
Table 5\.4: Rows 1\-6 from `disgust_total`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 | pathogen | moral | sexual | total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | U0 | 2008\-10\-07 | 5 | 6 | 4 | 6 | 5 | 5 | 6 | 4 | 0 | 1 | 0 | 1 | 4 | 5 | 6 | 1 | 6 | 5 | 4 | 5 | 6 | 33 | 37 | 15 | 85 |
| 1 | U1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 | 19 | 10 | 12 | 41 |
| 1599 | U2 | 2008\-10\-27 | 1 | 1 | 1 | 1 | NA | NA | 1 | 1 | NA | 1 | NA | 1 | NA | NA | NA | NA | 1 | NA | NA | NA | NA | NA | NA | NA | NA |
| 13332 | U2118 | 2012\-01\-02 | 0 | 1 | 1 | 1 | 1 | 2 | 1 | 4 | 3 | 0 | 6 | 0 | 3 | 5 | 5 | 6 | 4 | 6 | 5 | 5 | 4 | 35 | 7 | 21 | 63 |
| 23 | U2311 | 2008\-07\-15 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 2 | 1 | 2 | 1 | 1 | 1 | 5 | 5 | 5 | 4 | 4 | 5 | 4 | 3 | 30 | 28 | 13 | 71 |
| 1160 | U3630 | 2008\-10\-06 | 1 | 5 | NA | 5 | 5 | 5 | 1 | 0 | 5 | 0 | 2 | 0 | 1 | 0 | 6 | 3 | 1 | 1 | 3 | 1 | 0 | 15 | NA | 8 | NA |
You can overwrite a column by giving a new column the same name as the old column (see `user_id` above). Make sure that you mean to do this and that you aren’t trying to use the old column value after you redefine it.
### 5\.4\.5 summarise()
Create summary statistics for the dataset. Check the [Data Wrangling Cheat Sheet](https://www.rstudio.org/links/data_wrangling_cheat_sheet) or the [Data Transformation Cheat Sheet](https://github.com/rstudio/cheatsheets/raw/master/source/pdfs/data-transformation-cheatsheet.pdf) for various summary functions. Some common ones are: `mean()`, `sd()`, `n()`, `sum()`, and `quantile()`.
```
disgust_summary<- disgust_total %>%
summarise(
n = n(),
q25 = quantile(total, .25, na.rm = TRUE),
q50 = quantile(total, .50, na.rm = TRUE),
q75 = quantile(total, .75, na.rm = TRUE),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE)
)
```
Table 5\.5: All rows from `disgust_summary`
| n | q25 | q50 | q75 | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 20000 | 59 | 71 | 83 | 70\.6868 | 18\.24253 | 0 | 126 |
### 5\.4\.6 group\_by()
Create subsets of the data. You can use this to create summaries, like the mean value for all of your experimental groups.
Here, we’ll use `mutate` to create a new column called `year`, group by `year`, and calculate the average scores.
```
disgust_groups <- disgust_total %>%
mutate(year = year(date)) %>%
group_by(year) %>%
summarise(
n = n(),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE),
.groups = "drop"
)
```
Table 5\.6: All rows from `disgust_groups`
| year | n | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- |
| 2008 | 2578 | 70\.29975 | 18\.46251 | 0 | 126 |
| 2009 | 2580 | 69\.74481 | 18\.61959 | 3 | 126 |
| 2010 | 1514 | 70\.59238 | 18\.86846 | 6 | 126 |
| 2011 | 6046 | 71\.34425 | 17\.79446 | 0 | 126 |
| 2012 | 5938 | 70\.42530 | 18\.35782 | 0 | 126 |
| 2013 | 1251 | 71\.59574 | 17\.61375 | 0 | 126 |
| 2014 | 58 | 70\.46296 | 17\.23502 | 19 | 113 |
| 2015 | 21 | 74\.26316 | 16\.89787 | 43 | 107 |
| 2016 | 8 | 67\.87500 | 32\.62531 | 0 | 110 |
| 2017 | 6 | 57\.16667 | 27\.93862 | 21 | 90 |
If you don’t add `.groups = "drop"` at the end of the `summarise()` function, you will get the following message: “`summarise()` ungrouping output (override with `.groups` argument).” This just reminds you that the groups are still in effect and any further functions will also be grouped.
Older versions of dplyr didn’t do this, so older code will generate this message if you run it with a newer version of dplyr. Older code might `ungroup()` after `summarise()` to indicate that groupings should be dropped. The default behaviour is usually correct, so you don’t need to worry, but it’s best to explicitly set `.groups` in a `summarise()` function after `group_by()` if you want to “keep” or “drop” the groupings.
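As a quick check (a sketch, not part of the original example), you can compare which grouping is left on the result with `group_vars()`:
```
by_year <- disgust_total %>%
  mutate(year = year(date)) %>%
  group_by(year)

by_year %>% summarise(n = n(), .groups = "drop") %>% group_vars()
# character(0): no grouping remains

by_year %>% summarise(n = n(), .groups = "keep") %>% group_vars()
# "year": the result is still grouped by year
```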
You can use `filter` after `group_by`. The following example returns the lowest total score from each year (i.e., the row where the `rank()` of the value in the column `total` is equivalent to `1`).
```
disgust_lowest <- disgust_total %>%
mutate(year = year(date)) %>%
select(user_id, year, total) %>%
group_by(year) %>%
filter(rank(total) == 1) %>%
arrange(year)
```
Table 5\.7: All rows from `disgust_lowest`
| user\_id | year | total |
| --- | --- | --- |
| U236585 | 2009 | 3 |
| U292359 | 2010 | 6 |
| U245384 | 2013 | 0 |
| U206293 | 2014 | 19 |
| U407089 | 2015 | 43 |
| U453237 | 2016 | 0 |
| U356866 | 2017 | 21 |
You can also use `mutate` after `group_by`. The following example calculates subject\-mean\-centered scores by grouping the scores by `user_id` and then subtracting the group\-specific mean from each score. Note the use of `gather` to tidy the data into a long format first.
```
disgust_smc <- disgust %>%
gather("question", "score", moral1:pathogen7) %>%
group_by(user_id) %>%
mutate(score_smc = score - mean(score, na.rm = TRUE)) %>%
ungroup()
```
Use `ungroup()` as soon as you are done with grouped functions, otherwise the data table will still be grouped when you use it in the future.
Table 5\.8: Rows 1\-6 from `disgust_smc`
| id | user\_id | date | question | score | score\_smc |
| --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | moral1 | 5 | 0\.9523810 |
| 1 | 1 | 2008\-07\-10 | moral1 | 2 | 0\.0476190 |
| 1599 | 2 | 2008\-10\-27 | moral1 | 1 | 0\.0000000 |
| 13332 | 2118 | 2012\-01\-02 | moral1 | 0 | \-3\.0000000 |
| 23 | 2311 | 2008\-07\-15 | moral1 | 4 | 0\.6190476 |
| 1160 | 3630 | 2008\-10\-06 | moral1 | 1 | \-1\.2500000 |
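To see why a leftover grouping matters, here is a small sketch (a hypothetical follow\-up, not from the original text): the same `mean()` call is computed per group on a grouped table, but across all rows once the grouping is removed.
```
still_grouped <- disgust_smc %>% group_by(user_id)

# mean(score) here is the mean within each user_id...
still_grouped %>%
  mutate(centred = score - mean(score, na.rm = TRUE))

# ...but the grand mean across all rows after ungroup()
still_grouped %>%
  ungroup() %>%
  mutate(centred = score - mean(score, na.rm = TRUE))
```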
### 5\.4\.7 All Together
A lot of what we did above would be easier if the data were tidy, so let’s do that first. Then we can use `group_by` to calculate the domain scores.
After that, we can spread out the 3 domains, calculate the total score, remove any rows with a missing (`NA`) total, and calculate mean values by year.
```
disgust_tidy <- dataskills::disgust %>%
gather("question", "score", moral1:pathogen7) %>%
separate(question, c("domain","q_num"), sep = -1) %>%
group_by(id, user_id, date, domain) %>%
summarise(score = mean(score), .groups = "drop")
```
Table 5\.9: Rows 1\-6 from `disgust_tidy`
| id | user\_id | date | domain | score |
| --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | moral | 1\.428571 |
| 1 | 1 | 2008\-07\-10 | pathogen | 2\.714286 |
| 1 | 1 | 2008\-07\-10 | sexual | 1\.714286 |
| 3 | 155324 | 2008\-07\-11 | moral | 3\.000000 |
| 3 | 155324 | 2008\-07\-11 | pathogen | 2\.571429 |
| 3 | 155324 | 2008\-07\-11 | sexual | 1\.857143 |
```
disgust_scored <- disgust_tidy %>%
spread(domain, score) %>%
mutate(
total = moral + sexual + pathogen,
year = year(date)
) %>%
filter(!is.na(total)) %>%
arrange(user_id)
```
Table 5\.10: Rows 1\-6 from `disgust_scored`
| id | user\_id | date | moral | pathogen | sexual | total | year |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | 5\.285714 | 4\.714286 | 2\.142857 | 12\.142857 | 2008 |
| 1 | 1 | 2008\-07\-10 | 1\.428571 | 2\.714286 | 1\.714286 | 5\.857143 | 2008 |
| 13332 | 2118 | 2012\-01\-02 | 1\.000000 | 5\.000000 | 3\.000000 | 9\.000000 | 2012 |
| 23 | 2311 | 2008\-07\-15 | 4\.000000 | 4\.285714 | 1\.857143 | 10\.142857 | 2008 |
| 7980 | 4458 | 2011\-09\-05 | 3\.428571 | 3\.571429 | 3\.000000 | 10\.000000 | 2011 |
| 552 | 4651 | 2008\-08\-23 | 3\.857143 | 4\.857143 | 4\.285714 | 13\.000000 | 2008 |
```
disgust_summarised <- disgust_scored %>%
group_by(year) %>%
summarise(
n = n(),
avg_pathogen = mean(pathogen),
avg_moral = mean(moral),
avg_sexual = mean(sexual),
first_user = first(user_id),
last_user = last(user_id),
.groups = "drop"
)
```
Table 5\.11: Rows 1\-6 from `disgust_summarised`
| year | n | avg\_pathogen | avg\_moral | avg\_sexual | first\_user | last\_user |
| --- | --- | --- | --- | --- | --- | --- |
| 2008 | 2392 | 3\.697265 | 3\.806259 | 2\.539298 | 0 | 188708 |
| 2009 | 2410 | 3\.674333 | 3\.760937 | 2\.528275 | 6093 | 251959 |
| 2010 | 1418 | 3\.731412 | 3\.843139 | 2\.510075 | 5469 | 319641 |
| 2011 | 5586 | 3\.756918 | 3\.806506 | 2\.628612 | 4458 | 406569 |
| 2012 | 5375 | 3\.740465 | 3\.774591 | 2\.545701 | 2118 | 458194 |
| 2013 | 1222 | 3\.771920 | 3\.906944 | 2\.549100 | 7646 | 462428 |
| 2014 | 54 | 3\.759259 | 4\.000000 | 2\.306878 | 11090 | 461307 |
| 2015 | 19 | 3\.781955 | 4\.451128 | 2\.375940 | 102699 | 460283 |
| 2016 | 8 | 3\.696429 | 3\.625000 | 2\.375000 | 4976 | 453237 |
| 2017 | 6 | 3\.071429 | 3\.690476 | 1\.404762 | 48303 | 370464 |
5\.5 Additional dplyr one\-table verbs
--------------------------------------
Use the code examples below and the help pages to figure out what the following one\-table verbs do. Most have pretty self\-explanatory names.
### 5\.5\.1 rename()
You can rename columns with `rename()`. Set the argument name to the new name, and the value to the old name. You need to put a name in quotes or backticks if it doesn’t follow the rules for a good variable name (contains only letters, numbers, underscores, and full stops; and doesn’t start with a number).
```
sw <- starwars %>%
rename(Name = name,
Height = height,
Mass = mass,
`Hair Colour` = hair_color,
`Skin Colour` = skin_color,
`Eye Colour` = eye_color,
`Birth Year` = birth_year)
names(sw)
```
```
## [1] "Name" "Height" "Mass" "Hair Colour" "Skin Colour"
## [6] "Eye Colour" "Birth Year" "sex" "gender" "homeworld"
## [11] "species" "films" "vehicles" "starships"
```
Almost everyone gets confused at some point with `rename()` and tries to put the original names on the left and the new names on the right. Try it and see what the error message looks like.
### 5\.5\.2 distinct()
Get rid of exact duplicate rows with `distinct()`. This can be helpful if, for example, you are merging data from multiple computers and some of the data got copied from one computer to another, creating duplicate rows.
```
# create a data table with duplicated values
dupes <- tibble(
id = c( 1, 2, 1, 2, 1, 2),
dv = c("A", "B", "C", "D", "A", "B")
)
distinct(dupes)
```
| id | dv |
| --- | --- |
| 1 | A |
| 2 | B |
| 1 | C |
| 2 | D |
### 5\.5\.3 count()
The function `count()` is a quick shortcut for the common combination of `group_by()` and `summarise()` used to count the number of rows per group.
```
starwars %>%
group_by(sex) %>%
summarise(n = n(), .groups = "drop")
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
```
count(starwars, sex)
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
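`count()` can also take more than one variable, plus a `sort` argument to order the result by frequency; a brief sketch (not in the original examples):
```
# counts for each combination of sex and gender, most frequent first
count(starwars, sex, gender, sort = TRUE)
```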
### 5\.5\.4 slice()
Select rows by their position in the table, for example the first three rows and the tenth row:
```
slice(starwars, 1:3, 10)
```
| name | height | mass | hair\_color | skin\_color | eye\_color | birth\_year | sex | gender | homeworld | species | films | vehicles | starships |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Luke Skywalker | 172 | 77 | blond | fair | blue | 19 | male | masculine | Tatooine | Human | The Empire Strikes Back, Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | Snowspeeder , Imperial Speeder Bike | X\-wing , Imperial shuttle |
| C\-3PO | 167 | 75 | NA | gold | yellow | 112 | none | masculine | Tatooine | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | | |
| R2\-D2 | 96 | 32 | NA | white, blue | red | 33 | none | masculine | Naboo | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | | |
| Obi\-Wan Kenobi | 182 | 77 | auburn, white | fair | blue\-gray | 57 | male | masculine | Stewjon | Human | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | Tribubble bongo | Jedi starfighter , Trade Federation cruiser, Naboo star skiff , Jedi Interceptor , Belbullab\-22 starfighter |
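Newer versions of dplyr also provide `slice_` helpers; a brief sketch (not part of the original examples, and assumes dplyr 1\.0\.0 or later):
```
slice_head(starwars, n = 3)        # first 3 rows
slice_max(starwars, height, n = 3) # the 3 tallest characters
slice_sample(starwars, n = 3)      # 3 rows chosen at random
```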
### 5\.5\.5 pull()
Extract a single column as a vector:
```
starwars %>%
filter(species == "Droid") %>%
pull(name)
```
```
## [1] "C-3PO" "R2-D2" "R5-D4" "IG-88" "R4-P17" "BB8"
```
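The difference from `select()` is the return type: `select(name)` keeps a one\-column tibble, while `pull(name)` gives you a plain vector. A quick sketch:
```
droids <- starwars %>% filter(species == "Droid")
droids %>% select(name) %>% class() # "tbl_df" "tbl" "data.frame"
droids %>% pull(name) %>% class()   # "character"
```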
5\.6 Window functions
---------------------
Window functions use the order of rows to calculate values. You can use them to do things that require ranking or ordering, like choosing the top scores in each class, or that require access to the previous and next rows, like calculating cumulative sums or means.
The [dplyr window functions vignette](https://dplyr.tidyverse.org/articles/window-functions.html) has very good detailed explanations of these functions, but we’ve described a few of the most useful ones below.
### 5\.6\.1 Ranking functions
```
grades <- tibble(
id = 1:5,
"Data Skills" = c(16, 17, 17, 19, 20),
"Statistics" = c(14, 16, 18, 18, 19)
) %>%
gather(class, grade, 2:3) %>%
group_by(class) %>%
mutate(row_number = row_number(),
rank = rank(grade),
min_rank = min_rank(grade),
dense_rank = dense_rank(grade),
quartile = ntile(grade, 4),
percentile = ntile(grade, 100))
```
Table 5\.12: All rows from `grades`
| id | class | grade | row\_number | rank | min\_rank | dense\_rank | quartile | percentile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Data Skills | 16 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Data Skills | 17 | 2 | 2\.5 | 2 | 2 | 1 | 2 |
| 3 | Data Skills | 17 | 3 | 2\.5 | 2 | 2 | 2 | 3 |
| 4 | Data Skills | 19 | 4 | 4\.0 | 4 | 3 | 3 | 4 |
| 5 | Data Skills | 20 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
| 1 | Statistics | 14 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Statistics | 16 | 2 | 2\.0 | 2 | 2 | 1 | 2 |
| 3 | Statistics | 18 | 3 | 3\.5 | 3 | 3 | 2 | 3 |
| 4 | Statistics | 18 | 4 | 3\.5 | 3 | 3 | 3 | 4 |
| 5 | Statistics | 19 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
* What are the differences among `row_number()`, `rank()`, `min_rank()`, `dense_rank()`, and `ntile()`?
* Why doesn’t `row_number()` need an argument?
* What would happen if you gave it the argument `grade` or `class`?
* What do you think would happen if you removed the `group_by(class)` line above?
* What if you added `id` to the grouping?
* What happens if you change the order of the rows?
* What does the second argument in `ntile()` do?
You can use window functions to group your data into quantiles.
```
sw_mass <- starwars %>%
group_by(tertile = ntile(mass, 3)) %>%
summarise(min = min(mass),
max = max(mass),
mean = mean(mass),
.groups = "drop")
```
Table 5\.13: All rows from `sw_mass`
| tertile | min | max | mean |
| --- | --- | --- | --- |
| 1 | 15 | 68 | 45\.6600 |
| 2 | 74 | 82 | 78\.4100 |
| 3 | 83 | 1358 | 171\.5789 |
| NA | NA | NA | NA |
Why is there a row of `NA` values? How would you get rid of them?
### 5\.6\.2 Offset functions
The function `lag()` gives a previous row’s value. It defaults to 1 row back, but you can change that with the `n` argument. The function `lead()` gives values ahead of the current row.
```
lag_lead <- tibble(x = 1:6) %>%
mutate(lag = lag(x),
lag2 = lag(x, n = 2),
lead = lead(x, default = 0))
```
Table 5\.14: All rows from `lag_lead`
| x | lag | lag2 | lead |
| --- | --- | --- | --- |
| 1 | NA | NA | 2 |
| 2 | 1 | NA | 3 |
| 3 | 2 | 1 | 4 |
| 4 | 3 | 2 | 5 |
| 5 | 4 | 3 | 6 |
| 6 | 5 | 4 | 0 |
You can use offset functions to calculate change between trials or where a value changes. Use the `order_by` argument to specify the order of the rows. Alternatively, you can use `arrange()` before the offset functions.
```
trials <- tibble(
trial = sample(1:10, 10),
cond = sample(c("exp", "ctrl"), 10, T),
score = rpois(10, 4)
) %>%
mutate(
score_change = score - lag(score, order_by = trial),
change_cond = cond != lag(cond, order_by = trial,
default = "no condition")
) %>%
arrange(trial)
```
Table 5\.15: All rows from `trials`
| trial | cond | score | score\_change | change\_cond |
| --- | --- | --- | --- | --- |
| 1 | ctrl | 8 | NA | TRUE |
| 2 | ctrl | 4 | \-4 | FALSE |
| 3 | exp | 6 | 2 | TRUE |
| 4 | ctrl | 2 | \-4 | TRUE |
| 5 | ctrl | 3 | 1 | FALSE |
| 6 | ctrl | 6 | 3 | FALSE |
| 7 | ctrl | 2 | \-4 | FALSE |
| 8 | exp | 4 | 2 | TRUE |
| 9 | ctrl | 4 | 0 | TRUE |
| 10 | exp | 3 | \-1 | TRUE |
Look at the help pages for `lag()` and `lead()`.
* What happens if you remove the `order_by` argument or change it to `cond`?
* What does the `default` argument do?
* Can you think of circumstances in your own data where you might need to use `lag()` or `lead()`?
### 5\.6\.3 Cumulative aggregates
`cumsum()`, `cummin()`, and `cummax()` are base R functions for calculating cumulative sums, minimums, and maximums. The dplyr package introduces `cumany()` and `cumall()`, which return `TRUE` if any or all of the previous values meet their criteria.
```
cumulative <- tibble(
time = 1:10,
obs = c(2, 2, 1, 2, 4, 3, 1, 0, 3, 5)
) %>%
mutate(
cumsum = cumsum(obs),
cummin = cummin(obs),
cummax = cummax(obs),
cumany = cumany(obs == 3),
cumall = cumall(obs < 4)
)
```
Table 5\.16: All rows from `cumulative`
| time | obs | cumsum | cummin | cummax | cumany | cumall |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 2 | 2 | 2 | FALSE | TRUE |
| 2 | 2 | 4 | 2 | 2 | FALSE | TRUE |
| 3 | 1 | 5 | 1 | 2 | FALSE | TRUE |
| 4 | 2 | 7 | 1 | 2 | FALSE | TRUE |
| 5 | 4 | 11 | 1 | 4 | FALSE | FALSE |
| 6 | 3 | 14 | 1 | 4 | TRUE | FALSE |
| 7 | 1 | 15 | 1 | 4 | TRUE | FALSE |
| 8 | 0 | 15 | 0 | 4 | TRUE | FALSE |
| 9 | 3 | 18 | 0 | 4 | TRUE | FALSE |
| 10 | 5 | 23 | 0 | 5 | TRUE | FALSE |
* What would happen if you change `cumany(obs == 3)` to `cumany(obs > 2)`?
* What would happen if you change `cumall(obs < 4)` to `cumall(obs < 2)`?
* Can you think of circumstances in your own data where you might need to use `cumany()` or `cumall()`?
5\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [data wrangling](https://psyteachr.github.io/glossary/d#data.wrangling) | The process of preparing data for visualisation and statistical analysis. |
5\.8 Exercises
--------------
Download the [exercises](exercises/05_dplyr_exercise.Rmd). See the [answers](exercises/05_dplyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(5)
# run this to access the answers
dataskills::exercise(5, answers = TRUE)
```
### 5\.4\.1 select()
Select columns by name or number.
You can select each column individually, separated by commas (e.g., `col1, col2`). You can also select all columns between two columns by separating them with a colon (e.g., `start_col:end_col`).
```
moral <- disgust %>% select(user_id, moral1:moral7)
names(moral)
```
```
## [1] "user_id" "moral1" "moral2" "moral3" "moral4" "moral5" "moral6"
## [8] "moral7"
```
You can select columns by number, which is useful when the column names are long or complicated.
```
sexual <- disgust %>% select(2, 11:17)
names(sexual)
```
```
## [1] "user_id" "sexual1" "sexual2" "sexual3" "sexual4" "sexual5" "sexual6"
## [8] "sexual7"
```
You can use a minus symbol to unselect columns, leaving all of the other columns. If you want to exclude a span of columns, put parentheses around the span first (e.g., `-(moral1:moral7)`, not `-moral1:moral7`).
```
pathogen <- disgust %>% select(-id, -date, -(moral1:sexual7))
names(pathogen)
```
```
## [1] "user_id" "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5"
## [7] "pathogen6" "pathogen7"
```
#### 5\.4\.1\.1 Select helpers
You can select columns based on criteria about the column names.
##### 5\.4\.1\.1\.1 `starts_with()`
Select columns that start with a character string.
```
u <- disgust %>% select(starts_with("u"))
names(u)
```
```
## [1] "user_id"
```
##### 5\.4\.1\.1\.2 `ends_with()`
Select columns that end with a character string.
```
firstq <- disgust %>% select(ends_with("1"))
names(firstq)
```
```
## [1] "moral1" "sexual1" "pathogen1"
```
##### 5\.4\.1\.1\.3 `contains()`
Select columns that contain a character string.
```
pathogen <- disgust %>% select(contains("pathogen"))
names(pathogen)
```
```
## [1] "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5" "pathogen6"
## [7] "pathogen7"
```
##### 5\.4\.1\.1\.4 `num_range()`
Select columns with a name that matches the pattern `prefix`.
```
moral2_4 <- disgust %>% select(num_range("moral", 2:4))
names(moral2_4)
```
```
## [1] "moral2" "moral3" "moral4"
```
Use `width` to set the number of digits with leading zeros. For example, `num_range(‘var_,’ 8:10, width=2)` selects columns `var_08`, `var_09`, and `var_10`.
### 5\.4\.2 filter()
Select rows by matching column criteria.
Select all rows where the user\_id is 1 (that’s Lisa).
```
disgust %>% filter(user_id == 1)
```
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
Remember to use `==` and not `=` to check if two things are equivalent. A single `=` assigns the righthand value to the lefthand variable and (usually) evaluates to `TRUE`.
You can select on multiple criteria by separating them with commas.
```
amoral <- disgust %>% filter(
moral1 == 0,
moral2 == 0,
moral3 == 0,
moral4 == 0,
moral5 == 0,
moral6 == 0,
moral7 == 0
)
```
You can use the symbols `&`, `|`, and `!` to mean “and,” “or,” and “not.” You can also use other operators to make equations.
```
# everyone who chose either 0 or 7 for question moral1
moral_extremes <- disgust %>%
filter(moral1 == 0 | moral1 == 7)
# everyone who chose the same answer for all moral questions
moral_consistent <- disgust %>%
filter(
moral2 == moral1 &
moral3 == moral1 &
moral4 == moral1 &
moral5 == moral1 &
moral6 == moral1 &
moral7 == moral1
)
# everyone who did not answer 7 for all 7 moral questions
moral_no_ceiling <- disgust %>%
filter(moral1+moral2+moral3+moral4+moral5+moral6+moral7 != 7*7)
```
#### 5\.4\.2\.1 Match operator (%in%)
Sometimes you need to exclude some participant IDs for reasons that can’t be described in code. The match operator (`%in%`) is useful here for testing if a column value is in a list. Surround the equation with parentheses and put `!` in front to test that a value is not in the list.
```
no_researchers <- disgust %>%
filter(!(user_id %in% c(1,2)))
```
#### 5\.4\.2\.2 Dates
You can use the `lubridate` package to work with dates. For example, you can use the `year()` function to return just the year from the `date` column and then select only data collected in 2010\.
```
disgust2010 <- disgust %>%
filter(year(date) == 2010)
```
Table 5\.1: Rows 1\-6 from `disgust2010`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 6902 | 5469 | 2010\-12\-06 | 0 | 1 | 3 | 4 | 1 | 0 | 1 | 3 | 5 | 2 | 4 | 6 | 6 | 5 | 5 | 2 | 4 | 4 | 2 | 2 | 6 |
| 6158 | 6066 | 2010\-04\-18 | 4 | 5 | 6 | 5 | 5 | 4 | 4 | 3 | 0 | 1 | 6 | 3 | 5 | 3 | 6 | 5 | 5 | 5 | 5 | 5 | 5 |
| 6362 | 7129 | 2010\-06\-09 | 4 | 4 | 4 | 4 | 3 | 3 | 2 | 4 | 2 | 1 | 3 | 2 | 3 | 6 | 5 | 2 | 0 | 4 | 5 | 5 | 4 |
| 6302 | 39318 | 2010\-05\-20 | 2 | 4 | 1 | 4 | 5 | 6 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 3 | 2 | 3 | 2 | 3 | 2 | 4 |
| 5429 | 43029 | 2010\-01\-02 | 1 | 1 | 1 | 3 | 6 | 4 | 2 | 2 | 0 | 1 | 4 | 6 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 6 | 4 |
| 6732 | 71955 | 2010\-10\-15 | 2 | 5 | 3 | 6 | 3 | 2 | 5 | 4 | 3 | 3 | 6 | 6 | 6 | 5 | 4 | 2 | 6 | 5 | 6 | 6 | 3 |
Or select data from at least 5 years ago. You can use the `range` function to check the minimum and maximum dates in the resulting dataset.
```
disgust_5ago <- disgust %>%
filter(date < today() - dyears(5))
range(disgust_5ago$date)
```
```
## [1] "2008-07-10" "2016-08-04"
```
### 5\.4\.3 arrange()
Sort your dataset using `arrange()`. You will find yourself needing to sort data in R much less than you do in Excel, since you don’t need to have rows next to each other in order to, for example, calculate group means. But `arrange()` can be useful when preparing data from display in tables.
```
disgust_order <- disgust %>%
arrange(date, moral1)
```
Table 5\.2: Rows 1\-6 from `disgust_order`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
| 3 | 155324 | 2008\-07\-11 | 2 | 4 | 3 | 5 | 2 | 1 | 4 | 1 | 0 | 1 | 2 | 2 | 6 | 1 | 4 | 3 | 1 | 0 | 4 | 4 | 2 |
| 6 | 155386 | 2008\-07\-12 | 2 | 4 | 0 | 4 | 0 | 0 | 0 | 6 | 0 | 0 | 6 | 4 | 4 | 6 | 4 | 5 | 5 | 1 | 6 | 4 | 2 |
| 7 | 155409 | 2008\-07\-12 | 4 | 5 | 5 | 4 | 5 | 1 | 5 | 3 | 0 | 1 | 5 | 2 | 0 | 0 | 5 | 5 | 3 | 4 | 4 | 2 | 6 |
| 4 | 155366 | 2008\-07\-12 | 6 | 6 | 6 | 3 | 6 | 6 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 4 | 4 | 5 | 5 | 4 | 6 | 0 |
| 5 | 155370 | 2008\-07\-12 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 2 | 6 | 4 | 3 | 6 | 6 | 6 | 6 | 6 | 6 | 2 | 4 | 4 | 6 |
Reverse the order using `desc()`
```
disgust_order_desc <- disgust %>%
arrange(desc(date))
```
Table 5\.3: Rows 1\-6 from `disgust_order_desc`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 39456 | 356866 | 2017\-08\-21 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 39447 | 128727 | 2017\-08\-13 | 2 | 4 | 1 | 2 | 2 | 5 | 3 | 0 | 0 | 1 | 0 | 0 | 2 | 1 | 2 | 0 | 2 | 1 | 1 | 1 | 1 |
| 39371 | 152955 | 2017\-06\-13 | 6 | 6 | 3 | 6 | 6 | 6 | 6 | 1 | 0 | 0 | 2 | 1 | 4 | 4 | 5 | 0 | 5 | 4 | 3 | 6 | 3 |
| 39342 | 48303 | 2017\-05\-22 | 4 | 5 | 4 | 4 | 6 | 4 | 5 | 2 | 1 | 4 | 1 | 1 | 3 | 1 | 5 | 5 | 4 | 4 | 4 | 4 | 5 |
| 39159 | 151633 | 2017\-04\-04 | 4 | 5 | 6 | 5 | 3 | 6 | 2 | 6 | 4 | 0 | 4 | 0 | 3 | 6 | 4 | 4 | 6 | 6 | 6 | 6 | 4 |
| 38942 | 370464 | 2017\-02\-01 | 1 | 5 | 0 | 6 | 5 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 3 | 3 | 1 | 6 | 3 |
### 5\.4\.4 mutate()
Add new columns. This is one of the most useful functions in the tidyverse.
Refer to other columns by their names (unquoted). You can add more than one column in the same mutate function, just separate the columns with a comma. Once you make a new column, you can use it in further column definitions e.g., `total` below).
```
disgust_total <- disgust %>%
mutate(
pathogen = pathogen1 + pathogen2 + pathogen3 + pathogen4 + pathogen5 + pathogen6 + pathogen7,
moral = moral1 + moral2 + moral3 + moral4 + moral5 + moral6 + moral7,
sexual = sexual1 + sexual2 + sexual3 + sexual4 + sexual5 + sexual6 + sexual7,
total = pathogen + moral + sexual,
user_id = paste0("U", user_id)
)
```
Table 5\.4: Rows 1\-6 from `disgust_total`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 | pathogen | moral | sexual | total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | U0 | 2008\-10\-07 | 5 | 6 | 4 | 6 | 5 | 5 | 6 | 4 | 0 | 1 | 0 | 1 | 4 | 5 | 6 | 1 | 6 | 5 | 4 | 5 | 6 | 33 | 37 | 15 | 85 |
| 1 | U1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 | 19 | 10 | 12 | 41 |
| 1599 | U2 | 2008\-10\-27 | 1 | 1 | 1 | 1 | NA | NA | 1 | 1 | NA | 1 | NA | 1 | NA | NA | NA | NA | 1 | NA | NA | NA | NA | NA | NA | NA | NA |
| 13332 | U2118 | 2012\-01\-02 | 0 | 1 | 1 | 1 | 1 | 2 | 1 | 4 | 3 | 0 | 6 | 0 | 3 | 5 | 5 | 6 | 4 | 6 | 5 | 5 | 4 | 35 | 7 | 21 | 63 |
| 23 | U2311 | 2008\-07\-15 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 2 | 1 | 2 | 1 | 1 | 1 | 5 | 5 | 5 | 4 | 4 | 5 | 4 | 3 | 30 | 28 | 13 | 71 |
| 1160 | U3630 | 2008\-10\-06 | 1 | 5 | NA | 5 | 5 | 5 | 1 | 0 | 5 | 0 | 2 | 0 | 1 | 0 | 6 | 3 | 1 | 1 | 3 | 1 | 0 | 15 | NA | 8 | NA |
You can overwrite a column by giving a new column the same name as the old column (see `user_id`) above. Make sure that you mean to do this and that you aren’t trying to use the old column value after you redefine it.
### 5\.4\.5 summarise()
Create summary statistics for the dataset. Check the [Data Wrangling Cheat Sheet](https://www.rstudio.org/links/data_wrangling_cheat_sheet) or the [Data Transformation Cheat Sheet](https://github.com/rstudio/cheatsheets/raw/master/source/pdfs/data-transformation-cheatsheet.pdf) for various summary functions. Some common ones are: `mean()`, `sd()`, `n()`, `sum()`, and `quantile()`.
```
disgust_summary<- disgust_total %>%
summarise(
n = n(),
q25 = quantile(total, .25, na.rm = TRUE),
q50 = quantile(total, .50, na.rm = TRUE),
q75 = quantile(total, .75, na.rm = TRUE),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE)
)
```
Table 5\.5: All rows from `disgust_summary`
| n | q25 | q50 | q75 | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 20000 | 59 | 71 | 83 | 70\.6868 | 18\.24253 | 0 | 126 |
### 5\.4\.6 group\_by()
Create subsets of the data. You can use this to create summaries,
like the mean value for all of your experimental groups.
Here, we’ll use `mutate` to create a new column called `year`, group by `year`, and calculate the average scores.
```
disgust_groups <- disgust_total %>%
mutate(year = year(date)) %>%
group_by(year) %>%
summarise(
n = n(),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE),
.groups = "drop"
)
```
Table 5\.6: All rows from `disgust_groups`
| year | n | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- |
| 2008 | 2578 | 70\.29975 | 18\.46251 | 0 | 126 |
| 2009 | 2580 | 69\.74481 | 18\.61959 | 3 | 126 |
| 2010 | 1514 | 70\.59238 | 18\.86846 | 6 | 126 |
| 2011 | 6046 | 71\.34425 | 17\.79446 | 0 | 126 |
| 2012 | 5938 | 70\.42530 | 18\.35782 | 0 | 126 |
| 2013 | 1251 | 71\.59574 | 17\.61375 | 0 | 126 |
| 2014 | 58 | 70\.46296 | 17\.23502 | 19 | 113 |
| 2015 | 21 | 74\.26316 | 16\.89787 | 43 | 107 |
| 2016 | 8 | 67\.87500 | 32\.62531 | 0 | 110 |
| 2017 | 6 | 57\.16667 | 27\.93862 | 21 | 90 |
If you don’t add `.groups = “drop”` at the end of the `summarise()` function, you will get the following message: “`summarise()` ungrouping output (override with `.groups` argument).” This just reminds you that the groups are still in effect and any further functions will also be grouped.
Older versions of dplyr didn’t do this, so older code will generate this warning if you run it with newer version of dplyr. Older code might `ungroup()` after `summarise()` to indicate that groupings should be dropped. The default behaviour is usually correct, so you don’t need to worry, but it’s best to explicitly set `.groups` in a `summarise()` function after `group_by()` if you want to “keep” or “drop” the groupings.
You can use `filter` after `group_by`. The following example returns the lowest total score from each year (i.e., the row where the `rank()` of the value in the column `total` is equivalent to `1`).
```
disgust_lowest <- disgust_total %>%
mutate(year = year(date)) %>%
select(user_id, year, total) %>%
group_by(year) %>%
filter(rank(total) == 1) %>%
arrange(year)
```
Table 5\.7: All rows from `disgust_lowest`
| user\_id | year | total |
| --- | --- | --- |
| U236585 | 2009 | 3 |
| U292359 | 2010 | 6 |
| U245384 | 2013 | 0 |
| U206293 | 2014 | 19 |
| U407089 | 2015 | 43 |
| U453237 | 2016 | 0 |
| U356866 | 2017 | 21 |
You can also use `mutate` after `group_by`. The following example calculates subject\-mean\-centered scores by grouping the scores by `user_id` and then subtracting the group\-specific mean from each score. Note the use of `gather` to tidy the data into a long format first.
```
disgust_smc <- disgust %>%
gather("question", "score", moral1:pathogen7) %>%
group_by(user_id) %>%
mutate(score_smc = score - mean(score, na.rm = TRUE)) %>%
ungroup()
```
Use `ungroup()` as soon as you are done with grouped functions, otherwise the data table will still be grouped when you use it in the future.
Table 5\.8: Rows 1\-6 from `disgust_smc`
| id | user\_id | date | question | score | score\_smc |
| --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | moral1 | 5 | 0\.9523810 |
| 1 | 1 | 2008\-07\-10 | moral1 | 2 | 0\.0476190 |
| 1599 | 2 | 2008\-10\-27 | moral1 | 1 | 0\.0000000 |
| 13332 | 2118 | 2012\-01\-02 | moral1 | 0 | \-3\.0000000 |
| 23 | 2311 | 2008\-07\-15 | moral1 | 4 | 0\.6190476 |
| 1160 | 3630 | 2008\-10\-06 | moral1 | 1 | \-1\.2500000 |
### 5\.4\.7 All Together
A lot of what we did above would be easier if the data were tidy, so let’s do that first. Then we can use `group_by` to calculate the domain scores.
After that, we can spread out the 3 domains, calculate the total score, remove any rows with a missing (`NA`) total, and calculate mean values by year.
```
disgust_tidy <- dataskills::disgust %>%
gather("question", "score", moral1:pathogen7) %>%
separate(question, c("domain","q_num"), sep = -1) %>%
group_by(id, user_id, date, domain) %>%
summarise(score = mean(score), .groups = "drop")
```
Table 5\.9: Rows 1\-6 from `disgust_tidy`
| id | user\_id | date | domain | score |
| --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | moral | 1\.428571 |
| 1 | 1 | 2008\-07\-10 | pathogen | 2\.714286 |
| 1 | 1 | 2008\-07\-10 | sexual | 1\.714286 |
| 3 | 155324 | 2008\-07\-11 | moral | 3\.000000 |
| 3 | 155324 | 2008\-07\-11 | pathogen | 2\.571429 |
| 3 | 155324 | 2008\-07\-11 | sexual | 1\.857143 |
```
disgust_scored <- disgust_tidy %>%
spread(domain, score) %>%
mutate(
total = moral + sexual + pathogen,
year = year(date)
) %>%
filter(!is.na(total)) %>%
arrange(user_id)
```
Table 5\.10: Rows 1\-6 from `disgust_scored`
| id | user\_id | date | moral | pathogen | sexual | total | year |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | 5\.285714 | 4\.714286 | 2\.142857 | 12\.142857 | 2008 |
| 1 | 1 | 2008\-07\-10 | 1\.428571 | 2\.714286 | 1\.714286 | 5\.857143 | 2008 |
| 13332 | 2118 | 2012\-01\-02 | 1\.000000 | 5\.000000 | 3\.000000 | 9\.000000 | 2012 |
| 23 | 2311 | 2008\-07\-15 | 4\.000000 | 4\.285714 | 1\.857143 | 10\.142857 | 2008 |
| 7980 | 4458 | 2011\-09\-05 | 3\.428571 | 3\.571429 | 3\.000000 | 10\.000000 | 2011 |
| 552 | 4651 | 2008\-08\-23 | 3\.857143 | 4\.857143 | 4\.285714 | 13\.000000 | 2008 |
```
disgust_summarised <- disgust_scored %>%
group_by(year) %>%
summarise(
n = n(),
avg_pathogen = mean(pathogen),
avg_moral = mean(moral),
avg_sexual = mean(sexual),
first_user = first(user_id),
last_user = last(user_id),
.groups = "drop"
)
```
Table 5\.11: Rows 1\-6 from `disgust_summarised`
| year | n | avg\_pathogen | avg\_moral | avg\_sexual | first\_user | last\_user |
| --- | --- | --- | --- | --- | --- | --- |
| 2008 | 2392 | 3\.697265 | 3\.806259 | 2\.539298 | 0 | 188708 |
| 2009 | 2410 | 3\.674333 | 3\.760937 | 2\.528275 | 6093 | 251959 |
| 2010 | 1418 | 3\.731412 | 3\.843139 | 2\.510075 | 5469 | 319641 |
| 2011 | 5586 | 3\.756918 | 3\.806506 | 2\.628612 | 4458 | 406569 |
| 2012 | 5375 | 3\.740465 | 3\.774591 | 2\.545701 | 2118 | 458194 |
| 2013 | 1222 | 3\.771920 | 3\.906944 | 2\.549100 | 7646 | 462428 |
| 2014 | 54 | 3\.759259 | 4\.000000 | 2\.306878 | 11090 | 461307 |
| 2015 | 19 | 3\.781955 | 4\.451128 | 2\.375940 | 102699 | 460283 |
| 2016 | 8 | 3\.696429 | 3\.625000 | 2\.375000 | 4976 | 453237 |
| 2017 | 6 | 3\.071429 | 3\.690476 | 1\.404762 | 48303 | 370464 |
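The `gather()` and `spread()` steps above can also be written with the newer tidyr pivoting verbs. A sketch of the same scoring pipeline, assuming tidyr 1.0 or later:
```
disgust_scored2 <- dataskills::disgust %>%
  # long format: one row per question per questionnaire completion
  pivot_longer(moral1:pathogen7,
               names_to = "question",
               values_to = "score") %>%
  # split e.g. "moral1" into domain "moral" and question number "1"
  separate(question, c("domain", "q_num"), sep = -1) %>%
  group_by(id, user_id, date, domain) %>%
  summarise(score = mean(score), .groups = "drop") %>%
  # wide format: one column per domain
  pivot_wider(names_from = domain, values_from = score) %>%
  mutate(total = moral + sexual + pathogen,
         year = year(date)) %>%
  filter(!is.na(total)) %>%
  arrange(user_id)
```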
5\.5 Additional dplyr one\-table verbs
--------------------------------------
Use the code examples below and the help pages to figure out what the following one\-table verbs do. Most have pretty self\-explanatory names.
### 5\.5\.1 rename()
You can rename columns with `rename()`. Give the new name as the argument name and the old name as its value (i.e., `new_name = old_name`). You need to put a name in quotes or backticks if it doesn't follow the rules for a good variable name (contains only letters, numbers, underscores, and full stops; and doesn't start with a number).
```
sw <- starwars %>%
rename(Name = name,
Height = height,
Mass = mass,
`Hair Colour` = hair_color,
`Skin Colour` = skin_color,
`Eye Colour` = eye_color,
`Birth Year` = birth_year)
names(sw)
```
```
## [1] "Name" "Height" "Mass" "Hair Colour" "Skin Colour"
## [6] "Eye Colour" "Birth Year" "sex" "gender" "homeworld"
## [11] "species" "films" "vehicles" "starships"
```
Almost everyone gets confused at some point with `rename()` and tries to put the original names on the left and the new names on the right. Try it and see what the error message looks like.
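If you need to rename many columns at once, `rename_with()` (available in dplyr 1.0 and later) applies a function to the column names. A quick sketch:
```
# convert every column name to upper case
sw_upper <- starwars %>%
  rename_with(toupper)
names(sw_upper)
```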
### 5\.5\.2 distinct()
Get rid of exactly duplicate rows with `distinct()`. This can be helpful if, for example, you are merging data from multiple computers and some of the data got copied from one computer to another, creating duplicate rows.
```
# create a data table with duplicated values
dupes <- tibble(
id = c( 1, 2, 1, 2, 1, 2),
dv = c("A", "B", "C", "D", "A", "B")
)
distinct(dupes)
```
| id | dv |
| --- | --- |
| 1 | A |
| 2 | B |
| 1 | C |
| 2 | D |
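You can also give `distinct()` specific columns to define what counts as a duplicate. A short sketch, assuming you want one row per `id` and are happy to keep the first matching row's other values:
```
# one row per id; .keep_all = TRUE keeps the remaining columns
distinct(dupes, id, .keep_all = TRUE)
```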
### 5\.5\.3 count()
The function `count()` is a quick shortcut for the common combination of `group_by()` and `summarise()` used to count the number of rows per group.
```
starwars %>%
group_by(sex) %>%
summarise(n = n(), .groups = "drop")
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
```
count(starwars, sex)
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
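`count()` also accepts more than one column, plus a `sort` argument to put the most frequent groups first. A quick sketch:
```
# counts for each combination of species and sex,
# ordered with the most frequent combinations first
count(starwars, species, sex, sort = TRUE)
```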
### 5\.5\.4 slice()
```
slice(starwars, 1:3, 10)
```
| name | height | mass | hair\_color | skin\_color | eye\_color | birth\_year | sex | gender | homeworld | species | films | vehicles | starships |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Luke Skywalker | 172 | 77 | blond | fair | blue | 19 | male | masculine | Tatooine | Human | The Empire Strikes Back, Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | Snowspeeder , Imperial Speeder Bike | X\-wing , Imperial shuttle |
| C\-3PO | 167 | 75 | NA | gold | yellow | 112 | none | masculine | Tatooine | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | | |
| R2\-D2 | 96 | 32 | NA | white, blue | red | 33 | none | masculine | Naboo | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | | |
| Obi\-Wan Kenobi | 182 | 77 | auburn, white | fair | blue\-gray | 57 | male | masculine | Stewjon | Human | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | Tribubble bongo | Jedi starfighter , Trade Federation cruiser, Naboo star skiff , Jedi Interceptor , Belbullab\-22 starfighter |
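Newer versions of dplyr (1.0 and later) also provide `slice_head()`, `slice_tail()`, `slice_min()`, `slice_max()`, and `slice_sample()`. A brief sketch of two of them:
```
# the first 3 rows
slice_head(starwars, n = 3)

# the 5 tallest characters
slice_max(starwars, order_by = height, n = 5)
```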
### 5\.5\.5 pull()
```
starwars %>%
filter(species == "Droid") %>%
pull(name)
```
```
## [1] "C-3PO" "R2-D2" "R5-D4" "IG-88" "R4-P17" "BB8"
```
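In newer versions of dplyr, `pull()` can also return a named vector if you pass a second column to its `name` argument (a quick sketch):
```
# a named vector of droid heights, named by character
starwars %>%
  filter(species == "Droid") %>%
  pull(height, name = name)
```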
5\.6 Window functions
---------------------
Window functions use the order of rows to calculate values. You can use them to do things that require ranking or ordering, like choosing the top scores in each class, or that access the previous and next rows, like calculating cumulative sums or means.
The [dplyr window functions vignette](https://dplyr.tidyverse.org/articles/window-functions.html) has very good detailed explanations of these functions, but we’ve described a few of the most useful ones below.
### 5\.6\.1 Ranking functions
```
grades <- tibble(
id = 1:5,
"Data Skills" = c(16, 17, 17, 19, 20),
"Statistics" = c(14, 16, 18, 18, 19)
) %>%
gather(class, grade, 2:3) %>%
group_by(class) %>%
mutate(row_number = row_number(),
rank = rank(grade),
min_rank = min_rank(grade),
dense_rank = dense_rank(grade),
quartile = ntile(grade, 4),
percentile = ntile(grade, 100))
```
Table 5\.12: All rows from `grades`
| id | class | grade | row\_number | rank | min\_rank | dense\_rank | quartile | percentile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Data Skills | 16 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Data Skills | 17 | 2 | 2\.5 | 2 | 2 | 1 | 2 |
| 3 | Data Skills | 17 | 3 | 2\.5 | 2 | 2 | 2 | 3 |
| 4 | Data Skills | 19 | 4 | 4\.0 | 4 | 3 | 3 | 4 |
| 5 | Data Skills | 20 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
| 1 | Statistics | 14 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Statistics | 16 | 2 | 2\.0 | 2 | 2 | 1 | 2 |
| 3 | Statistics | 18 | 3 | 3\.5 | 3 | 3 | 2 | 3 |
| 4 | Statistics | 18 | 4 | 3\.5 | 3 | 3 | 3 | 4 |
| 5 | Statistics | 19 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
* What are the differences among `row_number()`, `rank()`, `min_rank()`, `dense_rank()`, and `ntile()`?
* Why doesn’t `row_number()` need an argument?
* What would happen if you gave it the argument `grade` or `class`?
* What do you think would happen if you removed the `group_by(class)` line above?
* What if you added `id` to the grouping?
* What happens if you change the order of the rows?
* What does the second argument in `ntile()` do?
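As a worked example of the "choose the top scores in each class" use case mentioned at the start of this section, here is a minimal sketch using `min_rank()` on the `grades` table:
```
# top 2 grades per class; tied grades are all kept
top_grades <- grades %>%
  group_by(class) %>%
  filter(min_rank(desc(grade)) <= 2) %>%
  arrange(class, desc(grade))
```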
You can use window functions to group your data into quantiles.
```
sw_mass <- starwars %>%
group_by(tertile = ntile(mass, 3)) %>%
summarise(min = min(mass),
max = max(mass),
mean = mean(mass),
.groups = "drop")
```
Table 5\.13: All rows from `sw_mass`
| tertile | min | max | mean |
| --- | --- | --- | --- |
| 1 | 15 | 68 | 45\.6600 |
| 2 | 74 | 82 | 78\.4100 |
| 3 | 83 | 1358 | 171\.5789 |
| NA | NA | NA | NA |
Why is there a row of `NA` values? How would you get rid of them?
### 5\.6\.2 Offset functions
The function `lag()` gives a previous row’s value. It defaults to 1 row back, but you can change that with the `n` argument. The function `lead()` gives values ahead of the current row.
```
lag_lead <- tibble(x = 1:6) %>%
mutate(lag = lag(x),
lag2 = lag(x, n = 2),
lead = lead(x, default = 0))
```
Table 5\.14: All rows from `lag_lead`
| x | lag | lag2 | lead |
| --- | --- | --- | --- |
| 1 | NA | NA | 2 |
| 2 | 1 | NA | 3 |
| 3 | 2 | 1 | 4 |
| 4 | 3 | 2 | 5 |
| 5 | 4 | 3 | 6 |
| 6 | 5 | 4 | 0 |
You can use offset functions to calculate the change between trials or to flag where a value changes. Use the `order_by` argument to specify the order of the rows. Alternatively, you can use `arrange()` before the offset functions.
```
trials <- tibble(
trial = sample(1:10, 10),
cond = sample(c("exp", "ctrl"), 10, T),
score = rpois(10, 4)
) %>%
mutate(
score_change = score - lag(score, order_by = trial),
change_cond = cond != lag(cond, order_by = trial,
default = "no condition")
) %>%
arrange(trial)
```
Table 5\.15: All rows from `trials`
| trial | cond | score | score\_change | change\_cond |
| --- | --- | --- | --- | --- |
| 1 | ctrl | 8 | NA | TRUE |
| 2 | ctrl | 4 | \-4 | FALSE |
| 3 | exp | 6 | 2 | TRUE |
| 4 | ctrl | 2 | \-4 | TRUE |
| 5 | ctrl | 3 | 1 | FALSE |
| 6 | ctrl | 6 | 3 | FALSE |
| 7 | ctrl | 2 | \-4 | FALSE |
| 8 | exp | 4 | 2 | TRUE |
| 9 | ctrl | 4 | 0 | TRUE |
| 10 | exp | 3 | \-1 | TRUE |
Look at the help pages for `lag()` and `lead()`.
* What happens if you remove the `order_by` argument or change it to `cond`?
* What does the `default` argument do?
* Can you think of circumstances in your own data where you might need to use `lag()` or `lead()`?
### 5\.6\.3 Cumulative aggregates
`cumsum()`, `cummin()`, and `cummax()` are base R functions for calculating cumulative sums, minimums, and maximums. The dplyr package introduces `cumany()` and `cumall()`, which return `TRUE` if any or all of the values up to and including the current row meet their criteria.
```
cumulative <- tibble(
time = 1:10,
obs = c(2, 2, 1, 2, 4, 3, 1, 0, 3, 5)
) %>%
mutate(
cumsum = cumsum(obs),
cummin = cummin(obs),
cummax = cummax(obs),
cumany = cumany(obs == 3),
cumall = cumall(obs < 4)
)
```
Table 5\.16: All rows from `cumulative`
| time | obs | cumsum | cummin | cummax | cumany | cumall |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 2 | 2 | 2 | FALSE | TRUE |
| 2 | 2 | 4 | 2 | 2 | FALSE | TRUE |
| 3 | 1 | 5 | 1 | 2 | FALSE | TRUE |
| 4 | 2 | 7 | 1 | 2 | FALSE | TRUE |
| 5 | 4 | 11 | 1 | 4 | FALSE | FALSE |
| 6 | 3 | 14 | 1 | 4 | TRUE | FALSE |
| 7 | 1 | 15 | 1 | 4 | TRUE | FALSE |
| 8 | 0 | 15 | 0 | 4 | TRUE | FALSE |
| 9 | 3 | 18 | 0 | 4 | TRUE | FALSE |
| 10 | 5 | 23 | 0 | 5 | TRUE | FALSE |
* What would happen if you change `cumany(obs == 3)` to `cumany(obs > 2)`?
* What would happen if you change `cumall(obs < 4)` to `cumall(obs < 2)`?
* Can you think of circumstances in your own data where you might need to use `cumany()` or `cumall()`?
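dplyr also provides `cummean()`, which calculates a running mean. A short sketch adding it to the `cumulative` table from above:
```
# running mean of obs from the first row up to the current one
cumulative_mean <- cumulative %>%
  mutate(cummean = cummean(obs))
```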
5\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [data wrangling](https://psyteachr.github.io/glossary/d#data.wrangling) | The process of preparing data for visualisation and statistical analysis. |
5\.8 Exercises
--------------
Download the [exercises](exercises/05_dplyr_exercise.Rmd). See the [answers](exercises/05_dplyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(5)
# run this to access the answers
dataskills::exercise(5, answers = TRUE)
```
5\.5 Additional dplyr one\-table verbs
--------------------------------------
Use the code examples below and the help pages to figure out what the following one\-table verbs do. Most have pretty self\-explanatory names.
### 5\.5\.1 rename()
You can rename columns with `rename()`. Set the argument name to the new name, and the value to the old name. You need to put a name in quotes or backticks if it doesn’t follow the rules for a good variable name (contains only letter, numbers, underscores, and full stops; and doesn’t start with a number).
```
sw <- starwars %>%
rename(Name = name,
Height = height,
Mass = mass,
`Hair Colour` = hair_color,
`Skin Colour` = skin_color,
`Eye Colour` = eye_color,
`Birth Year` = birth_year)
names(sw)
```
```
## [1] "Name" "Height" "Mass" "Hair Colour" "Skin Colour"
## [6] "Eye Colour" "Birth Year" "sex" "gender" "homeworld"
## [11] "species" "films" "vehicles" "starships"
```
Almost everyone gets confused at some point with `rename()` and tries to put the original names on the left and the new names on the right. Try it and see what the error message looks like.
### 5\.5\.2 distinct()
Get rid of exactly duplicate rows with `distinct()`. This can be helpful if, for example, you are merging data from multiple computers and some of the data got copied from one computer to another, creating duplicate rows.
```
# create a data table with duplicated values
dupes <- tibble(
id = c( 1, 2, 1, 2, 1, 2),
dv = c("A", "B", "C", "D", "A", "B")
)
distinct(dupes)
```
| id | dv |
| --- | --- |
| 1 | A |
| 2 | B |
| 1 | C |
| 2 | D |
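If rows only need to be unique on some columns, you can also pass those column names to `distinct()`; as a sketch, adding `.keep_all = TRUE` keeps the remaining columns from the first matching row:
```
# one row per id, keeping the first dv value seen for each id
distinct(dupes, id, .keep_all = TRUE)
```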
### 5\.5\.3 count()
The function `count()` is a quick shortcut for the common combination of `group_by()` and `summarise()` used to count the number of rows per group.
```
starwars %>%
group_by(sex) %>%
summarise(n = n(), .groups = "drop")
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
```
count(starwars, sex)
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
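`count()` also has a `sort` argument; a quick sketch that puts the largest groups first:
```
# same counts as above, sorted by n in descending order
count(starwars, sex, sort = TRUE)
```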
### 5\.5\.4 slice()
```
slice(starwars, 1:3, 10)
```
| name | height | mass | hair\_color | skin\_color | eye\_color | birth\_year | sex | gender | homeworld | species | films | vehicles | starships |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Luke Skywalker | 172 | 77 | blond | fair | blue | 19 | male | masculine | Tatooine | Human | The Empire Strikes Back, Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | Snowspeeder , Imperial Speeder Bike | X\-wing , Imperial shuttle |
| C\-3PO | 167 | 75 | NA | gold | yellow | 112 | none | masculine | Tatooine | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | | |
| R2\-D2 | 96 | 32 | NA | white, blue | red | 33 | none | masculine | Naboo | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | | |
| Obi\-Wan Kenobi | 182 | 77 | auburn, white | fair | blue\-gray | 57 | male | masculine | Stewjon | Human | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | Tribubble bongo | Jedi starfighter , Trade Federation cruiser, Naboo star skiff , Jedi Interceptor , Belbullab\-22 starfighter |
### 5\.5\.5 pull()
```
starwars %>%
filter(species == "Droid") %>%
pull(name)
```
```
## [1] "C-3PO" "R2-D2" "R5-D4" "IG-88" "R4-P17" "BB8"
```
5\.6 Window functions
---------------------
Window functions use the order of rows to calculate values. You can use them for tasks that require ranking or ordering, like choosing the top scores in each class, or for tasks that use the previous and next rows, like calculating cumulative sums or means.
The [dplyr window functions vignette](https://dplyr.tidyverse.org/articles/window-functions.html) has very good detailed explanations of these functions, but we’ve described a few of the most useful ones below.
### 5\.6\.1 Ranking functions
```
grades <- tibble(
id = 1:5,
"Data Skills" = c(16, 17, 17, 19, 20),
"Statistics" = c(14, 16, 18, 18, 19)
) %>%
gather(class, grade, 2:3) %>%
group_by(class) %>%
mutate(row_number = row_number(),
rank = rank(grade),
min_rank = min_rank(grade),
dense_rank = dense_rank(grade),
quartile = ntile(grade, 4),
percentile = ntile(grade, 100))
```
Table 5\.12: All rows from `grades`
| id | class | grade | row\_number | rank | min\_rank | dense\_rank | quartile | percentile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Data Skills | 16 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Data Skills | 17 | 2 | 2\.5 | 2 | 2 | 1 | 2 |
| 3 | Data Skills | 17 | 3 | 2\.5 | 2 | 2 | 2 | 3 |
| 4 | Data Skills | 19 | 4 | 4\.0 | 4 | 3 | 3 | 4 |
| 5 | Data Skills | 20 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
| 1 | Statistics | 14 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Statistics | 16 | 2 | 2\.0 | 2 | 2 | 1 | 2 |
| 3 | Statistics | 18 | 3 | 3\.5 | 3 | 3 | 2 | 3 |
| 4 | Statistics | 18 | 4 | 3\.5 | 3 | 3 | 3 | 4 |
| 5 | Statistics | 19 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
* What are the differences among `row_number()`, `rank()`, `min_rank()`, `dense_rank()`, and `ntile()`?
* Why doesn’t `row_number()` need an argument?
* What would happen if you gave it the argument `grade` or `class`?
* What do you think would happen if you removed the `group_by(class)` line above?
* What if you added `id` to the grouping?
* What happens if you change the order of the rows?
* What does the second argument in `ntile()` do?
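As an example of the kind of task mentioned at the start of this section, here is a sketch of using `min_rank()` to pick out the top score in each class (this works because `grades` is still grouped by `class`; `top_grades` is only an illustrative name):
```
# highest grade within each class; ties would all be kept
top_grades <- grades %>%
  filter(min_rank(desc(grade)) == 1)
```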
You can use window functions to group your data into quantiles.
```
sw_mass <- starwars %>%
group_by(tertile = ntile(mass, 3)) %>%
summarise(min = min(mass),
max = max(mass),
mean = mean(mass),
.groups = "drop")
```
Table 5\.13: All rows from `sw_mass`
| tertile | min | max | mean |
| --- | --- | --- | --- |
| 1 | 15 | 68 | 45\.6600 |
| 2 | 74 | 82 | 78\.4100 |
| 3 | 83 | 1358 | 171\.5789 |
| NA | NA | NA | NA |
Why is there a row of `NA` values? How would you get rid of them?
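One way to handle it (a sketch, assuming the `NA` row comes from characters whose `mass` is missing) is to drop those rows before ranking:
```
sw_mass_complete <- starwars %>%
  filter(!is.na(mass)) %>%   # drop characters with unknown mass
  group_by(tertile = ntile(mass, 3)) %>%
  summarise(min = min(mass),
            max = max(mass),
            mean = mean(mass),
            .groups = "drop")
```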
### 5\.6\.2 Offset functions
The function `lag()` gives a previous row’s value. It defaults to 1 row back, but you can change that with the `n` argument. The function `lead()` gives values ahead of the current row.
```
lag_lead <- tibble(x = 1:6) %>%
mutate(lag = lag(x),
lag2 = lag(x, n = 2),
lead = lead(x, default = 0))
```
Table 5\.14: All rows from `lag_lead`
| x | lag | lag2 | lead |
| --- | --- | --- | --- |
| 1 | NA | NA | 2 |
| 2 | 1 | NA | 3 |
| 3 | 2 | 1 | 4 |
| 4 | 3 | 2 | 5 |
| 5 | 4 | 3 | 6 |
| 6 | 5 | 4 | 0 |
You can use offset functions to calculate the change between trials or to detect where a value changes. Use the `order_by` argument to specify the order of the rows. Alternatively, you can use `arrange()` before the offset functions.
```
trials <- tibble(
trial = sample(1:10, 10),
cond = sample(c("exp", "ctrl"), 10, T),
score = rpois(10, 4)
) %>%
mutate(
score_change = score - lag(score, order_by = trial),
change_cond = cond != lag(cond, order_by = trial,
default = "no condition")
) %>%
arrange(trial)
```
Table 5\.15: All rows from `trials`
| trial | cond | score | score\_change | change\_cond |
| --- | --- | --- | --- | --- |
| 1 | ctrl | 8 | NA | TRUE |
| 2 | ctrl | 4 | \-4 | FALSE |
| 3 | exp | 6 | 2 | TRUE |
| 4 | ctrl | 2 | \-4 | TRUE |
| 5 | ctrl | 3 | 1 | FALSE |
| 6 | ctrl | 6 | 3 | FALSE |
| 7 | ctrl | 2 | \-4 | FALSE |
| 8 | exp | 4 | 2 | TRUE |
| 9 | ctrl | 4 | 0 | TRUE |
| 10 | exp | 3 | \-1 | TRUE |
Look at the help pages for `lag()` and `lead()`.
* What happens if you remove the `order_by` argument or change it to `cond`?
* What does the `default` argument do?
* Can you think of circumstances in your own data where you might need to use `lag()` or `lead()`?
### 5\.6\.3 Cumulative aggregates
`cumsum()`, `cummin()`, and `cummax()` are base R functions for calculating cumulative sums, minimums, and maximums. The dplyr package introduces `cumany()` and `cumall()`, which return `TRUE` if any or all of the previous values meet their criteria.
```
cumulative <- tibble(
time = 1:10,
obs = c(2, 2, 1, 2, 4, 3, 1, 0, 3, 5)
) %>%
mutate(
cumsum = cumsum(obs),
cummin = cummin(obs),
cummax = cummax(obs),
cumany = cumany(obs == 3),
cumall = cumall(obs < 4)
)
```
Table 5\.16: All rows from `cumulative`
| time | obs | cumsum | cummin | cummax | cumany | cumall |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 2 | 2 | 2 | FALSE | TRUE |
| 2 | 2 | 4 | 2 | 2 | FALSE | TRUE |
| 3 | 1 | 5 | 1 | 2 | FALSE | TRUE |
| 4 | 2 | 7 | 1 | 2 | FALSE | TRUE |
| 5 | 4 | 11 | 1 | 4 | FALSE | FALSE |
| 6 | 3 | 14 | 1 | 4 | TRUE | FALSE |
| 7 | 1 | 15 | 1 | 4 | TRUE | FALSE |
| 8 | 0 | 15 | 0 | 4 | TRUE | FALSE |
| 9 | 3 | 18 | 0 | 4 | TRUE | FALSE |
| 10 | 5 | 23 | 0 | 5 | TRUE | FALSE |
* What would happen if you change `cumany(obs == 3)` to `cumany(obs > 2)`?
* What would happen if you change `cumall(obs < 4)` to `cumall(obs < 2)`?
* Can you think of circumstances in your own data where you might need to use `cumany()` or `cumall()`?
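If you do want a running mean, dplyr also provides `cummean()`; a small sketch (the values in the comment are rounded):
```
tibble(obs = c(2, 2, 1, 2, 4)) %>%
  mutate(run_mean = cummean(obs))   # 2, 2, 1.67, 1.75, 2.2
```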
5\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [data wrangling](https://psyteachr.github.io/glossary/d#data.wrangling) | The process of preparing data for visualisation and statistical analysis. |
5\.8 Exercises
--------------
Download the [exercises](exercises/05_dplyr_exercise.Rmd). See the [answers](exercises/05_dplyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(5)
# run this to access the answers
dataskills::exercise(5, answers = TRUE)
```
Chapter 5 Data Wrangling
========================
5\.1 Learning Objectives
------------------------
### 5\.1\.1 Basic
1. Be able to use the 6 main dplyr one\-table verbs: [(video)](https://youtu.be/l12tNKClTR0)
* [`select()`](dplyr.html#select)
* [`filter()`](dplyr.html#filter)
* [`arrange()`](dplyr.html#arrange)
* [`mutate()`](dplyr.html#mutate)
* [`summarise()`](dplyr.html#summarise)
* [`group_by()`](dplyr.html#group_by)
2. Be able to [wrangle data by chaining tidyr and dplyr functions](dplyr.html#all-together) [(video)](https://youtu.be/hzFFAkwrkqA)
3. Be able to use these additional one\-table verbs: [(video)](https://youtu.be/GmfF162mq4g)
* [`rename()`](dplyr.html#rename)
* [`distinct()`](dplyr.html#distinct)
* [`count()`](dplyr.html#count)
* [`slice()`](dplyr.html#slice)
* [`pull()`](dplyr.html#pull)
### 5\.1\.2 Intermediate
4. Fine control of [`select()` operations](dplyr.html#select_helpers) [(video)](https://youtu.be/R1bi1QwF9t0)
5. Use [window functions](dplyr.html#window) [(video)](https://youtu.be/uo4b0W9mqPc)
5\.2 Resources
--------------
* [Chapter 5: Data Transformation](http://r4ds.had.co.nz/transform.html) in *R for Data Science*
* [Data transformation cheat sheet](https://github.com/rstudio/cheatsheets/raw/master/data-transformation.pdf)
* [Chapter 16: Date and times](http://r4ds.had.co.nz/dates-and-times.html) in *R for Data Science*
5\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(lubridate)
library(dataskills)
set.seed(8675309) # makes sure random numbers are reproducible
```
### 5\.3\.1 The `disgust` dataset
These examples will use data from `dataskills::disgust`, which contains data from the [Three Domain Disgust Scale](http://digitalrepository.unm.edu/cgi/viewcontent.cgi?article=1139&context=psy_etds). Each participant is identified by a unique `user_id` and each questionnaire completion has a unique `id`. Look at the Help for this dataset to see the individual questions.
```
data("disgust", package = "dataskills")
#disgust <- read_csv("https://psyteachr.github.io/msc-data-skills/data/disgust.csv")
```
5\.4 Six main dplyr verbs
-------------------------
Most of the [data wrangling](https://psyteachr.github.io/glossary/d#data-wrangling "The process of preparing data for visualisation and statistical analysis.") you’ll want to do with psychological data will involve the `tidyr` functions you learned in [Chapter 4](tidyr.html#tidyr) and the six main `dplyr` verbs: `select`, `filter`, `arrange`, `mutate`, `summarise`, and `group_by`.
### 5\.4\.1 select()
Select columns by name or number.
You can select each column individually, separated by commas (e.g., `col1, col2`). You can also select all columns between two columns by separating them with a colon (e.g., `start_col:end_col`).
```
moral <- disgust %>% select(user_id, moral1:moral7)
names(moral)
```
```
## [1] "user_id" "moral1" "moral2" "moral3" "moral4" "moral5" "moral6"
## [8] "moral7"
```
You can select columns by number, which is useful when the column names are long or complicated.
```
sexual <- disgust %>% select(2, 11:17)
names(sexual)
```
```
## [1] "user_id" "sexual1" "sexual2" "sexual3" "sexual4" "sexual5" "sexual6"
## [8] "sexual7"
```
You can use a minus symbol to unselect columns, leaving all of the other columns. If you want to exclude a span of columns, put parentheses around the span first (e.g., `-(moral1:moral7)`, not `-moral1:moral7`).
```
pathogen <- disgust %>% select(-id, -date, -(moral1:sexual7))
names(pathogen)
```
```
## [1] "user_id" "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5"
## [7] "pathogen6" "pathogen7"
```
#### 5\.4\.1\.1 Select helpers
You can select columns based on criteria about the column names.
##### 5\.4\.1\.1\.1 `starts_with()`
Select columns that start with a character string.
```
u <- disgust %>% select(starts_with("u"))
names(u)
```
```
## [1] "user_id"
```
##### 5\.4\.1\.1\.2 `ends_with()`
Select columns that end with a character string.
```
firstq <- disgust %>% select(ends_with("1"))
names(firstq)
```
```
## [1] "moral1" "sexual1" "pathogen1"
```
##### 5\.4\.1\.1\.3 `contains()`
Select columns that contain a character string.
```
pathogen <- disgust %>% select(contains("pathogen"))
names(pathogen)
```
```
## [1] "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5" "pathogen6"
## [7] "pathogen7"
```
##### 5\.4\.1\.1\.4 `num_range()`
Select columns with a name that matches the pattern `prefix`.
```
moral2_4 <- disgust %>% select(num_range("moral", 2:4))
names(moral2_4)
```
```
## [1] "moral2" "moral3" "moral4"
```
Use `width` to set the number of digits with leading zeros. For example, `num_range(‘var_,’ 8:10, width=2)` selects columns `var_08`, `var_09`, and `var_10`.
### 5\.4\.2 filter()
Select rows by matching column criteria.
Select all rows where the user\_id is 1 (that’s Lisa).
```
disgust %>% filter(user_id == 1)
```
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
Remember to use `==` and not `=` to check if two things are equivalent. A single `=` assigns the righthand value to the lefthand variable and (usually) evaluates to `TRUE`.
You can select on multiple criteria by separating them with commas.
```
amoral <- disgust %>% filter(
moral1 == 0,
moral2 == 0,
moral3 == 0,
moral4 == 0,
moral5 == 0,
moral6 == 0,
moral7 == 0
)
```
You can use the symbols `&`, `|`, and `!` to mean “and,” “or,” and “not.” You can also use other operators to make equations.
```
# everyone who chose either 0 or 7 for question moral1
moral_extremes <- disgust %>%
filter(moral1 == 0 | moral1 == 7)
# everyone who chose the same answer for all moral questions
moral_consistent <- disgust %>%
filter(
moral2 == moral1 &
moral3 == moral1 &
moral4 == moral1 &
moral5 == moral1 &
moral6 == moral1 &
moral7 == moral1
)
# everyone who did not answer 7 for all 7 moral questions
moral_no_ceiling <- disgust %>%
filter(moral1+moral2+moral3+moral4+moral5+moral6+moral7 != 7*7)
```
#### 5\.4\.2\.1 Match operator (%in%)
Sometimes you need to exclude some participant IDs for reasons that can’t be described in code. The match operator (`%in%`) is useful here for testing if a column value is in a list. Surround the equation with parentheses and put `!` in front to test that a value is not in the list.
```
no_researchers <- disgust %>%
filter(!(user_id %in% c(1,2)))
```
#### 5\.4\.2\.2 Dates
You can use the `lubridate` package to work with dates. For example, you can use the `year()` function to return just the year from the `date` column and then select only data collected in 2010\.
```
disgust2010 <- disgust %>%
filter(year(date) == 2010)
```
Table 5\.1: Rows 1\-6 from `disgust2010`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 6902 | 5469 | 2010\-12\-06 | 0 | 1 | 3 | 4 | 1 | 0 | 1 | 3 | 5 | 2 | 4 | 6 | 6 | 5 | 5 | 2 | 4 | 4 | 2 | 2 | 6 |
| 6158 | 6066 | 2010\-04\-18 | 4 | 5 | 6 | 5 | 5 | 4 | 4 | 3 | 0 | 1 | 6 | 3 | 5 | 3 | 6 | 5 | 5 | 5 | 5 | 5 | 5 |
| 6362 | 7129 | 2010\-06\-09 | 4 | 4 | 4 | 4 | 3 | 3 | 2 | 4 | 2 | 1 | 3 | 2 | 3 | 6 | 5 | 2 | 0 | 4 | 5 | 5 | 4 |
| 6302 | 39318 | 2010\-05\-20 | 2 | 4 | 1 | 4 | 5 | 6 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 3 | 2 | 3 | 2 | 3 | 2 | 4 |
| 5429 | 43029 | 2010\-01\-02 | 1 | 1 | 1 | 3 | 6 | 4 | 2 | 2 | 0 | 1 | 4 | 6 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 6 | 4 |
| 6732 | 71955 | 2010\-10\-15 | 2 | 5 | 3 | 6 | 3 | 2 | 5 | 4 | 3 | 3 | 6 | 6 | 6 | 5 | 4 | 2 | 6 | 5 | 6 | 6 | 3 |
Or select data from at least 5 years ago. You can use the `range` function to check the minimum and maximum dates in the resulting dataset.
```
disgust_5ago <- disgust %>%
filter(date < today() - dyears(5))
range(disgust_5ago$date)
```
```
## [1] "2008-07-10" "2016-08-04"
```
### 5\.4\.3 arrange()
Sort your dataset using `arrange()`. You will find yourself needing to sort data in R much less than you do in Excel, since you don’t need to have rows next to each other in order to, for example, calculate group means. But `arrange()` can be useful when preparing data from display in tables.
```
disgust_order <- disgust %>%
arrange(date, moral1)
```
Table 5\.2: Rows 1\-6 from `disgust_order`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
| 3 | 155324 | 2008\-07\-11 | 2 | 4 | 3 | 5 | 2 | 1 | 4 | 1 | 0 | 1 | 2 | 2 | 6 | 1 | 4 | 3 | 1 | 0 | 4 | 4 | 2 |
| 6 | 155386 | 2008\-07\-12 | 2 | 4 | 0 | 4 | 0 | 0 | 0 | 6 | 0 | 0 | 6 | 4 | 4 | 6 | 4 | 5 | 5 | 1 | 6 | 4 | 2 |
| 7 | 155409 | 2008\-07\-12 | 4 | 5 | 5 | 4 | 5 | 1 | 5 | 3 | 0 | 1 | 5 | 2 | 0 | 0 | 5 | 5 | 3 | 4 | 4 | 2 | 6 |
| 4 | 155366 | 2008\-07\-12 | 6 | 6 | 6 | 3 | 6 | 6 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 4 | 4 | 5 | 5 | 4 | 6 | 0 |
| 5 | 155370 | 2008\-07\-12 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 2 | 6 | 4 | 3 | 6 | 6 | 6 | 6 | 6 | 6 | 2 | 4 | 4 | 6 |
Reverse the order using `desc()`
```
disgust_order_desc <- disgust %>%
arrange(desc(date))
```
Table 5\.3: Rows 1\-6 from `disgust_order_desc`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 39456 | 356866 | 2017\-08\-21 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 39447 | 128727 | 2017\-08\-13 | 2 | 4 | 1 | 2 | 2 | 5 | 3 | 0 | 0 | 1 | 0 | 0 | 2 | 1 | 2 | 0 | 2 | 1 | 1 | 1 | 1 |
| 39371 | 152955 | 2017\-06\-13 | 6 | 6 | 3 | 6 | 6 | 6 | 6 | 1 | 0 | 0 | 2 | 1 | 4 | 4 | 5 | 0 | 5 | 4 | 3 | 6 | 3 |
| 39342 | 48303 | 2017\-05\-22 | 4 | 5 | 4 | 4 | 6 | 4 | 5 | 2 | 1 | 4 | 1 | 1 | 3 | 1 | 5 | 5 | 4 | 4 | 4 | 4 | 5 |
| 39159 | 151633 | 2017\-04\-04 | 4 | 5 | 6 | 5 | 3 | 6 | 2 | 6 | 4 | 0 | 4 | 0 | 3 | 6 | 4 | 4 | 6 | 6 | 6 | 6 | 4 |
| 38942 | 370464 | 2017\-02\-01 | 1 | 5 | 0 | 6 | 5 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 3 | 3 | 1 | 6 | 3 |
### 5\.4\.4 mutate()
Add new columns. This is one of the most useful functions in the tidyverse.
Refer to other columns by their names (unquoted). You can add more than one column in the same mutate function, just separate the columns with a comma. Once you make a new column, you can use it in further column definitions e.g., `total` below).
```
disgust_total <- disgust %>%
mutate(
pathogen = pathogen1 + pathogen2 + pathogen3 + pathogen4 + pathogen5 + pathogen6 + pathogen7,
moral = moral1 + moral2 + moral3 + moral4 + moral5 + moral6 + moral7,
sexual = sexual1 + sexual2 + sexual3 + sexual4 + sexual5 + sexual6 + sexual7,
total = pathogen + moral + sexual,
user_id = paste0("U", user_id)
)
```
Table 5\.4: Rows 1\-6 from `disgust_total`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 | pathogen | moral | sexual | total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | U0 | 2008\-10\-07 | 5 | 6 | 4 | 6 | 5 | 5 | 6 | 4 | 0 | 1 | 0 | 1 | 4 | 5 | 6 | 1 | 6 | 5 | 4 | 5 | 6 | 33 | 37 | 15 | 85 |
| 1 | U1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 | 19 | 10 | 12 | 41 |
| 1599 | U2 | 2008\-10\-27 | 1 | 1 | 1 | 1 | NA | NA | 1 | 1 | NA | 1 | NA | 1 | NA | NA | NA | NA | 1 | NA | NA | NA | NA | NA | NA | NA | NA |
| 13332 | U2118 | 2012\-01\-02 | 0 | 1 | 1 | 1 | 1 | 2 | 1 | 4 | 3 | 0 | 6 | 0 | 3 | 5 | 5 | 6 | 4 | 6 | 5 | 5 | 4 | 35 | 7 | 21 | 63 |
| 23 | U2311 | 2008\-07\-15 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 2 | 1 | 2 | 1 | 1 | 1 | 5 | 5 | 5 | 4 | 4 | 5 | 4 | 3 | 30 | 28 | 13 | 71 |
| 1160 | U3630 | 2008\-10\-06 | 1 | 5 | NA | 5 | 5 | 5 | 1 | 0 | 5 | 0 | 2 | 0 | 1 | 0 | 6 | 3 | 1 | 1 | 3 | 1 | 0 | 15 | NA | 8 | NA |
You can overwrite a column by giving a new column the same name as the old column (see `user_id`) above. Make sure that you mean to do this and that you aren’t trying to use the old column value after you redefine it.
### 5\.4\.5 summarise()
Create summary statistics for the dataset. Check the [Data Wrangling Cheat Sheet](https://www.rstudio.org/links/data_wrangling_cheat_sheet) or the [Data Transformation Cheat Sheet](https://github.com/rstudio/cheatsheets/raw/master/source/pdfs/data-transformation-cheatsheet.pdf) for various summary functions. Some common ones are: `mean()`, `sd()`, `n()`, `sum()`, and `quantile()`.
```
disgust_summary<- disgust_total %>%
summarise(
n = n(),
q25 = quantile(total, .25, na.rm = TRUE),
q50 = quantile(total, .50, na.rm = TRUE),
q75 = quantile(total, .75, na.rm = TRUE),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE)
)
```
Table 5\.5: All rows from `disgust_summary`
| n | q25 | q50 | q75 | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 20000 | 59 | 71 | 83 | 70\.6868 | 18\.24253 | 0 | 126 |
### 5\.4\.6 group\_by()
Create subsets of the data. You can use this to create summaries,
like the mean value for all of your experimental groups.
Here, we’ll use `mutate` to create a new column called `year`, group by `year`, and calculate the average scores.
```
disgust_groups <- disgust_total %>%
mutate(year = year(date)) %>%
group_by(year) %>%
summarise(
n = n(),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE),
.groups = "drop"
)
```
Table 5\.6: All rows from `disgust_groups`
| year | n | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- |
| 2008 | 2578 | 70\.29975 | 18\.46251 | 0 | 126 |
| 2009 | 2580 | 69\.74481 | 18\.61959 | 3 | 126 |
| 2010 | 1514 | 70\.59238 | 18\.86846 | 6 | 126 |
| 2011 | 6046 | 71\.34425 | 17\.79446 | 0 | 126 |
| 2012 | 5938 | 70\.42530 | 18\.35782 | 0 | 126 |
| 2013 | 1251 | 71\.59574 | 17\.61375 | 0 | 126 |
| 2014 | 58 | 70\.46296 | 17\.23502 | 19 | 113 |
| 2015 | 21 | 74\.26316 | 16\.89787 | 43 | 107 |
| 2016 | 8 | 67\.87500 | 32\.62531 | 0 | 110 |
| 2017 | 6 | 57\.16667 | 27\.93862 | 21 | 90 |
If you don’t add `.groups = “drop”` at the end of the `summarise()` function, you will get the following message: “`summarise()` ungrouping output (override with `.groups` argument).” This just reminds you that the groups are still in effect and any further functions will also be grouped.
Older versions of dplyr didn’t do this, so older code will generate this warning if you run it with newer version of dplyr. Older code might `ungroup()` after `summarise()` to indicate that groupings should be dropped. The default behaviour is usually correct, so you don’t need to worry, but it’s best to explicitly set `.groups` in a `summarise()` function after `group_by()` if you want to “keep” or “drop” the groupings.
You can use `filter` after `group_by`. The following example returns the lowest total score from each year (i.e., the row where the `rank()` of the value in the column `total` is equivalent to `1`).
```
disgust_lowest <- disgust_total %>%
mutate(year = year(date)) %>%
select(user_id, year, total) %>%
group_by(year) %>%
filter(rank(total) == 1) %>%
arrange(year)
```
Table 5\.7: All rows from `disgust_lowest`
| user\_id | year | total |
| --- | --- | --- |
| U236585 | 2009 | 3 |
| U292359 | 2010 | 6 |
| U245384 | 2013 | 0 |
| U206293 | 2014 | 19 |
| U407089 | 2015 | 43 |
| U453237 | 2016 | 0 |
| U356866 | 2017 | 21 |
You can also use `mutate` after `group_by`. The following example calculates subject\-mean\-centered scores by grouping the scores by `user_id` and then subtracting the group\-specific mean from each score. Note the use of `gather` to tidy the data into a long format first.
```
disgust_smc <- disgust %>%
gather("question", "score", moral1:pathogen7) %>%
group_by(user_id) %>%
mutate(score_smc = score - mean(score, na.rm = TRUE)) %>%
ungroup()
```
Use `ungroup()` as soon as you are done with grouped functions, otherwise the data table will still be grouped when you use it in the future.
Table 5\.8: Rows 1\-6 from `disgust_smc`
| id | user\_id | date | question | score | score\_smc |
| --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | moral1 | 5 | 0\.9523810 |
| 1 | 1 | 2008\-07\-10 | moral1 | 2 | 0\.0476190 |
| 1599 | 2 | 2008\-10\-27 | moral1 | 1 | 0\.0000000 |
| 13332 | 2118 | 2012\-01\-02 | moral1 | 0 | \-3\.0000000 |
| 23 | 2311 | 2008\-07\-15 | moral1 | 4 | 0\.6190476 |
| 1160 | 3630 | 2008\-10\-06 | moral1 | 1 | \-1\.2500000 |
### 5\.4\.7 All Together
A lot of what we did above would be easier if the data were tidy, so let’s do that first. Then we can use `group_by` to calculate the domain scores.
After that, we can spread out the 3 domains, calculate the total score, remove any rows with a missing (`NA`) total, and calculate mean values by year.
```
disgust_tidy <- dataskills::disgust %>%
gather("question", "score", moral1:pathogen7) %>%
separate(question, c("domain","q_num"), sep = -1) %>%
group_by(id, user_id, date, domain) %>%
summarise(score = mean(score), .groups = "drop")
```
Table 5\.9: Rows 1\-6 from `disgust_tidy`
| id | user\_id | date | domain | score |
| --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | moral | 1\.428571 |
| 1 | 1 | 2008\-07\-10 | pathogen | 2\.714286 |
| 1 | 1 | 2008\-07\-10 | sexual | 1\.714286 |
| 3 | 155324 | 2008\-07\-11 | moral | 3\.000000 |
| 3 | 155324 | 2008\-07\-11 | pathogen | 2\.571429 |
| 3 | 155324 | 2008\-07\-11 | sexual | 1\.857143 |
```
disgust_scored <- disgust_tidy %>%
spread(domain, score) %>%
mutate(
total = moral + sexual + pathogen,
year = year(date)
) %>%
filter(!is.na(total)) %>%
arrange(user_id)
```
Table 5\.10: Rows 1\-6 from `disgust_scored`
| id | user\_id | date | moral | pathogen | sexual | total | year |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | 5\.285714 | 4\.714286 | 2\.142857 | 12\.142857 | 2008 |
| 1 | 1 | 2008\-07\-10 | 1\.428571 | 2\.714286 | 1\.714286 | 5\.857143 | 2008 |
| 13332 | 2118 | 2012\-01\-02 | 1\.000000 | 5\.000000 | 3\.000000 | 9\.000000 | 2012 |
| 23 | 2311 | 2008\-07\-15 | 4\.000000 | 4\.285714 | 1\.857143 | 10\.142857 | 2008 |
| 7980 | 4458 | 2011\-09\-05 | 3\.428571 | 3\.571429 | 3\.000000 | 10\.000000 | 2011 |
| 552 | 4651 | 2008\-08\-23 | 3\.857143 | 4\.857143 | 4\.285714 | 13\.000000 | 2008 |
```
disgust_summarised <- disgust_scored %>%
group_by(year) %>%
summarise(
n = n(),
avg_pathogen = mean(pathogen),
avg_moral = mean(moral),
avg_sexual = mean(sexual),
first_user = first(user_id),
last_user = last(user_id),
.groups = "drop"
)
```
Table 5\.11: Rows 1\-6 from `disgust_summarised`
| year | n | avg\_pathogen | avg\_moral | avg\_sexual | first\_user | last\_user |
| --- | --- | --- | --- | --- | --- | --- |
| 2008 | 2392 | 3\.697265 | 3\.806259 | 2\.539298 | 0 | 188708 |
| 2009 | 2410 | 3\.674333 | 3\.760937 | 2\.528275 | 6093 | 251959 |
| 2010 | 1418 | 3\.731412 | 3\.843139 | 2\.510075 | 5469 | 319641 |
| 2011 | 5586 | 3\.756918 | 3\.806506 | 2\.628612 | 4458 | 406569 |
| 2012 | 5375 | 3\.740465 | 3\.774591 | 2\.545701 | 2118 | 458194 |
| 2013 | 1222 | 3\.771920 | 3\.906944 | 2\.549100 | 7646 | 462428 |
| 2014 | 54 | 3\.759259 | 4\.000000 | 2\.306878 | 11090 | 461307 |
| 2015 | 19 | 3\.781955 | 4\.451128 | 2\.375940 | 102699 | 460283 |
| 2016 | 8 | 3\.696429 | 3\.625000 | 2\.375000 | 4976 | 453237 |
| 2017 | 6 | 3\.071429 | 3\.690476 | 1\.404762 | 48303 | 370464 |
5\.5 Additional dplyr one\-table verbs
--------------------------------------
Use the code examples below and the help pages to figure out what the following one\-table verbs do. Most have pretty self\-explanatory names.
### 5\.5\.1 rename()
You can rename columns with `rename()`. Set the argument name to the new name, and the value to the old name. You need to put a name in quotes or backticks if it doesn’t follow the rules for a good variable name (contains only letter, numbers, underscores, and full stops; and doesn’t start with a number).
```
sw <- starwars %>%
rename(Name = name,
Height = height,
Mass = mass,
`Hair Colour` = hair_color,
`Skin Colour` = skin_color,
`Eye Colour` = eye_color,
`Birth Year` = birth_year)
names(sw)
```
```
## [1] "Name" "Height" "Mass" "Hair Colour" "Skin Colour"
## [6] "Eye Colour" "Birth Year" "sex" "gender" "homeworld"
## [11] "species" "films" "vehicles" "starships"
```
Almost everyone gets confused at some point with `rename()` and tries to put the original names on the left and the new names on the right. Try it and see what the error message looks like.
### 5\.5\.2 distinct()
Get rid of exactly duplicate rows with `distinct()`. This can be helpful if, for example, you are merging data from multiple computers and some of the data got copied from one computer to another, creating duplicate rows.
```
# create a data table with duplicated values
dupes <- tibble(
id = c( 1, 2, 1, 2, 1, 2),
dv = c("A", "B", "C", "D", "A", "B")
)
distinct(dupes)
```
| id | dv |
| --- | --- |
| 1 | A |
| 2 | B |
| 1 | C |
| 2 | D |
### 5\.5\.3 count()
The function `count()` is a quick shortcut for the common combination of `group_by()` and `summarise()` used to count the number of rows per group.
```
starwars %>%
group_by(sex) %>%
summarise(n = n(), .groups = "drop")
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
```
count(starwars, sex)
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
### 5\.5\.4 slice()
```
slice(starwars, 1:3, 10)
```
| name | height | mass | hair\_color | skin\_color | eye\_color | birth\_year | sex | gender | homeworld | species | films | vehicles | starships |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Luke Skywalker | 172 | 77 | blond | fair | blue | 19 | male | masculine | Tatooine | Human | The Empire Strikes Back, Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | Snowspeeder , Imperial Speeder Bike | X\-wing , Imperial shuttle |
| C\-3PO | 167 | 75 | NA | gold | yellow | 112 | none | masculine | Tatooine | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | | |
| R2\-D2 | 96 | 32 | NA | white, blue | red | 33 | none | masculine | Naboo | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | | |
| Obi\-Wan Kenobi | 182 | 77 | auburn, white | fair | blue\-gray | 57 | male | masculine | Stewjon | Human | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | Tribubble bongo | Jedi starfighter , Trade Federation cruiser, Naboo star skiff , Jedi Interceptor , Belbullab\-22 starfighter |
### 5\.5\.5 pull()
```
starwars %>%
filter(species == "Droid") %>%
pull(name)
```
```
## [1] "C-3PO" "R2-D2" "R5-D4" "IG-88" "R4-P17" "BB8"
```
5\.6 Window functions
---------------------
Window functions use the order of rows to calculate values. You can use them to do things that require ranking or ordering, like choose the top scores in each class, or accessing the previous and next rows, like calculating cumulative sums or means.
The [dplyr window functions vignette](https://dplyr.tidyverse.org/articles/window-functions.html) has very good detailed explanations of these functions, but we’ve described a few of the most useful ones below.
### 5\.6\.1 Ranking functions
```
grades <- tibble(
id = 1:5,
"Data Skills" = c(16, 17, 17, 19, 20),
"Statistics" = c(14, 16, 18, 18, 19)
) %>%
gather(class, grade, 2:3) %>%
group_by(class) %>%
mutate(row_number = row_number(),
rank = rank(grade),
min_rank = min_rank(grade),
dense_rank = dense_rank(grade),
quartile = ntile(grade, 4),
percentile = ntile(grade, 100))
```
Table 5\.12: All rows from `grades`
| id | class | grade | row\_number | rank | min\_rank | dense\_rank | quartile | percentile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Data Skills | 16 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Data Skills | 17 | 2 | 2\.5 | 2 | 2 | 1 | 2 |
| 3 | Data Skills | 17 | 3 | 2\.5 | 2 | 2 | 2 | 3 |
| 4 | Data Skills | 19 | 4 | 4\.0 | 4 | 3 | 3 | 4 |
| 5 | Data Skills | 20 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
| 1 | Statistics | 14 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Statistics | 16 | 2 | 2\.0 | 2 | 2 | 1 | 2 |
| 3 | Statistics | 18 | 3 | 3\.5 | 3 | 3 | 2 | 3 |
| 4 | Statistics | 18 | 4 | 3\.5 | 3 | 3 | 3 | 4 |
| 5 | Statistics | 19 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
* What are the differences among `row_number()`, `rank()`, `min_rank()`, `dense_rank()`, and `ntile()`?
* Why doesn’t `row_number()` need an argument?
* What would happen if you gave it the argument `grade` or `class`?
* What do you think would happen if you removed the `group_by(class)` line above?
* What if you added `id` to the grouping?
* What happens if you change the order of the rows?
* What does the second argument in `ntile()` do?
You can use window functions to group your data into quantiles.
```
sw_mass <- starwars %>%
group_by(tertile = ntile(mass, 3)) %>%
summarise(min = min(mass),
max = max(mass),
mean = mean(mass),
.groups = "drop")
```
Table 5\.13: All rows from `sw_mass`
| tertile | min | max | mean |
| --- | --- | --- | --- |
| 1 | 15 | 68 | 45\.6600 |
| 2 | 74 | 82 | 78\.4100 |
| 3 | 83 | 1358 | 171\.5789 |
| NA | NA | NA | NA |
Why is there a row of `NA` values? How would you get rid of them?
### 5\.6\.2 Offset functions
The function `lag()` gives a previous row’s value. It defaults to 1 row back, but you can change that with the `n` argument. The function `lead()` gives values ahead of the current row.
```
lag_lead <- tibble(x = 1:6) %>%
mutate(lag = lag(x),
lag2 = lag(x, n = 2),
lead = lead(x, default = 0))
```
Table 5\.14: All rows from `lag_lead`
| x | lag | lag2 | lead |
| --- | --- | --- | --- |
| 1 | NA | NA | 2 |
| 2 | 1 | NA | 3 |
| 3 | 2 | 1 | 4 |
| 4 | 3 | 2 | 5 |
| 5 | 4 | 3 | 6 |
| 6 | 5 | 4 | 0 |
You can use offset functions to calculate change between trials or where a value changes. Use the `order_by` argument to specify the order of the rows. Alternatively, you can use `arrange()` before the offset functions.
```
trials <- tibble(
trial = sample(1:10, 10),
cond = sample(c("exp", "ctrl"), 10, T),
score = rpois(10, 4)
) %>%
mutate(
score_change = score - lag(score, order_by = trial),
change_cond = cond != lag(cond, order_by = trial,
default = "no condition")
) %>%
arrange(trial)
```
Table 5\.15: All rows from `trials`
| trial | cond | score | score\_change | change\_cond |
| --- | --- | --- | --- | --- |
| 1 | ctrl | 8 | NA | TRUE |
| 2 | ctrl | 4 | \-4 | FALSE |
| 3 | exp | 6 | 2 | TRUE |
| 4 | ctrl | 2 | \-4 | TRUE |
| 5 | ctrl | 3 | 1 | FALSE |
| 6 | ctrl | 6 | 3 | FALSE |
| 7 | ctrl | 2 | \-4 | FALSE |
| 8 | exp | 4 | 2 | TRUE |
| 9 | ctrl | 4 | 0 | TRUE |
| 10 | exp | 3 | \-1 | TRUE |
Look at the help pages for `lag()` and `lead()`.
* What happens if you remove the `order_by` argument or change it to `cond`?
* What does the `default` argument do?
* Can you think of circumstances in your own data where you might need to use `lag()` or `lead()`?
### 5\.6\.3 Cumulative aggregates
`cumsum()`, `cummin()`, and `cummax()` are base R functions for calculating cumulative means, minimums, and maximums. The dplyr package introduces `cumany()` and `cumall()`, which return `TRUE` if any or all of the previous values meet their criteria.
```
cumulative <- tibble(
time = 1:10,
obs = c(2, 2, 1, 2, 4, 3, 1, 0, 3, 5)
) %>%
mutate(
cumsum = cumsum(obs),
cummin = cummin(obs),
cummax = cummax(obs),
cumany = cumany(obs == 3),
cumall = cumall(obs < 4)
)
```
Table 5\.16: All rows from `cumulative`
| time | obs | cumsum | cummin | cummax | cumany | cumall |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 2 | 2 | 2 | FALSE | TRUE |
| 2 | 2 | 4 | 2 | 2 | FALSE | TRUE |
| 3 | 1 | 5 | 1 | 2 | FALSE | TRUE |
| 4 | 2 | 7 | 1 | 2 | FALSE | TRUE |
| 5 | 4 | 11 | 1 | 4 | FALSE | FALSE |
| 6 | 3 | 14 | 1 | 4 | TRUE | FALSE |
| 7 | 1 | 15 | 1 | 4 | TRUE | FALSE |
| 8 | 0 | 15 | 0 | 4 | TRUE | FALSE |
| 9 | 3 | 18 | 0 | 4 | TRUE | FALSE |
| 10 | 5 | 23 | 0 | 5 | TRUE | FALSE |
* What would happen if you change `cumany(obs == 3)` to `cumany(obs > 2)`?
* What would happen if you change `cumall(obs < 4)` to `cumall(obs < 2)`?
* Can you think of circumstances in your own data where you might need to use `cumany()` or `cumall()`?
5\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [data wrangling](https://psyteachr.github.io/glossary/d#data.wrangling) | The process of preparing data for visualisation and statistical analysis. |
5\.8 Exercises
--------------
Download the [exercises](exercises/05_dplyr_exercise.Rmd). See the [answers](exercises/05_dplyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(5)
# run this to access the answers
dataskills::exercise(5, answers = TRUE)
```
5\.1 Learning Objectives
------------------------
### 5\.1\.1 Basic
1. Be able to use the 6 main dplyr one\-table verbs: [(video)](https://youtu.be/l12tNKClTR0)
* [`select()`](dplyr.html#select)
* [`filter()`](dplyr.html#filter)
* [`arrange()`](dplyr.html#arrange)
* [`mutate()`](dplyr.html#mutate)
* [`summarise()`](dplyr.html#summarise)
* [`group_by()`](dplyr.html#group_by)
2. Be able to [wrangle data by chaining tidyr and dplyr functions](dplyr.html#all-together) [(video)](https://youtu.be/hzFFAkwrkqA)
3. Be able to use these additional one\-table verbs: [(video)](https://youtu.be/GmfF162mq4g)
* [`rename()`](dplyr.html#rename)
* [`distinct()`](dplyr.html#distinct)
* [`count()`](dplyr.html#count)
* [`slice()`](dplyr.html#slice)
* [`pull()`](dplyr.html#pull)
### 5\.1\.2 Intermediate
4. Fine control of [`select()` operations](dplyr.html#select_helpers) [(video)](https://youtu.be/R1bi1QwF9t0)
5. Use [window functions](dplyr.html#window) [(video)](https://youtu.be/uo4b0W9mqPc)
### 5\.1\.1 Basic
1. Be able to use the 6 main dplyr one\-table verbs: [(video)](https://youtu.be/l12tNKClTR0)
* [`select()`](dplyr.html#select)
* [`filter()`](dplyr.html#filter)
* [`arrange()`](dplyr.html#arrange)
* [`mutate()`](dplyr.html#mutate)
* [`summarise()`](dplyr.html#summarise)
* [`group_by()`](dplyr.html#group_by)
2. Be able to [wrangle data by chaining tidyr and dplyr functions](dplyr.html#all-together) [(video)](https://youtu.be/hzFFAkwrkqA)
3. Be able to use these additional one\-table verbs: [(video)](https://youtu.be/GmfF162mq4g)
* [`rename()`](dplyr.html#rename)
* [`distinct()`](dplyr.html#distinct)
* [`count()`](dplyr.html#count)
* [`slice()`](dplyr.html#slice)
* [`pull()`](dplyr.html#pull)
### 5\.1\.2 Intermediate
4. Fine control of [`select()` operations](dplyr.html#select_helpers) [(video)](https://youtu.be/R1bi1QwF9t0)
5. Use [window functions](dplyr.html#window) [(video)](https://youtu.be/uo4b0W9mqPc)
5\.2 Resources
--------------
* [Chapter 5: Data Transformation](http://r4ds.had.co.nz/transform.html) in *R for Data Science*
* [Data transformation cheat sheet](https://github.com/rstudio/cheatsheets/raw/master/data-transformation.pdf)
* [Chapter 16: Date and times](http://r4ds.had.co.nz/dates-and-times.html) in *R for Data Science*
5\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(lubridate)
library(dataskills)
set.seed(8675309) # makes sure random numbers are reproducible
```
### 5\.3\.1 The `disgust` dataset
These examples will use data from `dataskills::disgust`, which contains data from the [Three Domain Disgust Scale](http://digitalrepository.unm.edu/cgi/viewcontent.cgi?article=1139&context=psy_etds). Each participant is identified by a unique `user_id` and each questionnaire completion has a unique `id`. Look at the Help for this dataset to see the individual questions.
```
data("disgust", package = "dataskills")
#disgust <- read_csv("https://psyteachr.github.io/msc-data-skills/data/disgust.csv")
```
### 5\.3\.1 The `disgust` dataset
These examples will use data from `dataskills::disgust`, which contains data from the [Three Domain Disgust Scale](http://digitalrepository.unm.edu/cgi/viewcontent.cgi?article=1139&context=psy_etds). Each participant is identified by a unique `user_id` and each questionnaire completion has a unique `id`. Look at the Help for this dataset to see the individual questions.
```
data("disgust", package = "dataskills")
#disgust <- read_csv("https://psyteachr.github.io/msc-data-skills/data/disgust.csv")
```
5\.4 Six main dplyr verbs
-------------------------
Most of the [data wrangling](https://psyteachr.github.io/glossary/d#data-wrangling "The process of preparing data for visualisation and statistical analysis.") you’ll want to do with psychological data will involve the `tidyr` functions you learned in [Chapter 4](tidyr.html#tidyr) and the six main `dplyr` verbs: `select`, `filter`, `arrange`, `mutate`, `summarise`, and `group_by`.
### 5\.4\.1 select()
Select columns by name or number.
You can select each column individually, separated by commas (e.g., `col1, col2`). You can also select all columns between two columns by separating them with a colon (e.g., `start_col:end_col`).
```
moral <- disgust %>% select(user_id, moral1:moral7)
names(moral)
```
```
## [1] "user_id" "moral1" "moral2" "moral3" "moral4" "moral5" "moral6"
## [8] "moral7"
```
You can select columns by number, which is useful when the column names are long or complicated.
```
sexual <- disgust %>% select(2, 11:17)
names(sexual)
```
```
## [1] "user_id" "sexual1" "sexual2" "sexual3" "sexual4" "sexual5" "sexual6"
## [8] "sexual7"
```
You can use a minus symbol to unselect columns, leaving all of the other columns. If you want to exclude a span of columns, put parentheses around the span first (e.g., `-(moral1:moral7)`, not `-moral1:moral7`).
```
pathogen <- disgust %>% select(-id, -date, -(moral1:sexual7))
names(pathogen)
```
```
## [1] "user_id" "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5"
## [7] "pathogen6" "pathogen7"
```
#### 5\.4\.1\.1 Select helpers
You can select columns based on criteria about the column names.
##### 5\.4\.1\.1\.1 `starts_with()`
Select columns that start with a character string.
```
u <- disgust %>% select(starts_with("u"))
names(u)
```
```
## [1] "user_id"
```
##### 5\.4\.1\.1\.2 `ends_with()`
Select columns that end with a character string.
```
firstq <- disgust %>% select(ends_with("1"))
names(firstq)
```
```
## [1] "moral1" "sexual1" "pathogen1"
```
##### 5\.4\.1\.1\.3 `contains()`
Select columns that contain a character string.
```
pathogen <- disgust %>% select(contains("pathogen"))
names(pathogen)
```
```
## [1] "pathogen1" "pathogen2" "pathogen3" "pathogen4" "pathogen5" "pathogen6"
## [7] "pathogen7"
```
##### 5\.4\.1\.1\.4 `num_range()`
Select columns with a name that matches the pattern `prefix`.
```
moral2_4 <- disgust %>% select(num_range("moral", 2:4))
names(moral2_4)
```
```
## [1] "moral2" "moral3" "moral4"
```
Use `width` to set the number of digits with leading zeros. For example, `num_range(‘var_,’ 8:10, width=2)` selects columns `var_08`, `var_09`, and `var_10`.
### 5\.4\.2 filter()
Select rows by matching column criteria.
Select all rows where the user\_id is 1 (that’s Lisa).
```
disgust %>% filter(user_id == 1)
```
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
Remember to use `==` and not `=` to check if two things are equivalent. A single `=` assigns the righthand value to the lefthand variable and (usually) evaluates to `TRUE`.
You can select on multiple criteria by separating them with commas.
```
amoral <- disgust %>% filter(
moral1 == 0,
moral2 == 0,
moral3 == 0,
moral4 == 0,
moral5 == 0,
moral6 == 0,
moral7 == 0
)
```
You can use the symbols `&`, `|`, and `!` to mean “and,” “or,” and “not.” You can also use other operators to make equations.
```
# everyone who chose either 0 or 7 for question moral1
moral_extremes <- disgust %>%
filter(moral1 == 0 | moral1 == 7)
# everyone who chose the same answer for all moral questions
moral_consistent <- disgust %>%
filter(
moral2 == moral1 &
moral3 == moral1 &
moral4 == moral1 &
moral5 == moral1 &
moral6 == moral1 &
moral7 == moral1
)
# everyone who did not answer 7 for all 7 moral questions
moral_no_ceiling <- disgust %>%
filter(moral1+moral2+moral3+moral4+moral5+moral6+moral7 != 7*7)
```
#### 5\.4\.2\.1 Match operator (%in%)
Sometimes you need to exclude some participant IDs for reasons that can’t be described in code. The match operator (`%in%`) is useful here for testing if a column value is in a list. Surround the equation with parentheses and put `!` in front to test that a value is not in the list.
```
no_researchers <- disgust %>%
filter(!(user_id %in% c(1,2)))
```
#### 5\.4\.2\.2 Dates
You can use the `lubridate` package to work with dates. For example, you can use the `year()` function to return just the year from the `date` column and then select only data collected in 2010\.
```
disgust2010 <- disgust %>%
filter(year(date) == 2010)
```
Table 5\.1: Rows 1\-6 from `disgust2010`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 6902 | 5469 | 2010\-12\-06 | 0 | 1 | 3 | 4 | 1 | 0 | 1 | 3 | 5 | 2 | 4 | 6 | 6 | 5 | 5 | 2 | 4 | 4 | 2 | 2 | 6 |
| 6158 | 6066 | 2010\-04\-18 | 4 | 5 | 6 | 5 | 5 | 4 | 4 | 3 | 0 | 1 | 6 | 3 | 5 | 3 | 6 | 5 | 5 | 5 | 5 | 5 | 5 |
| 6362 | 7129 | 2010\-06\-09 | 4 | 4 | 4 | 4 | 3 | 3 | 2 | 4 | 2 | 1 | 3 | 2 | 3 | 6 | 5 | 2 | 0 | 4 | 5 | 5 | 4 |
| 6302 | 39318 | 2010\-05\-20 | 2 | 4 | 1 | 4 | 5 | 6 | 0 | 1 | 0 | 0 | 1 | 0 | 0 | 1 | 3 | 2 | 3 | 2 | 3 | 2 | 4 |
| 5429 | 43029 | 2010\-01\-02 | 1 | 1 | 1 | 3 | 6 | 4 | 2 | 2 | 0 | 1 | 4 | 6 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 6 | 4 |
| 6732 | 71955 | 2010\-10\-15 | 2 | 5 | 3 | 6 | 3 | 2 | 5 | 4 | 3 | 3 | 6 | 6 | 6 | 5 | 4 | 2 | 6 | 5 | 6 | 6 | 3 |
Or select data from at least 5 years ago. You can use the `range` function to check the minimum and maximum dates in the resulting dataset.
```
disgust_5ago <- disgust %>%
filter(date < today() - dyears(5))
range(disgust_5ago$date)
```
```
## [1] "2008-07-10" "2016-08-04"
```
### 5\.4\.3 arrange()
Sort your dataset using `arrange()`. You will find yourself needing to sort data in R much less than you do in Excel, since you don’t need to have rows next to each other in order to, for example, calculate group means. But `arrange()` can be useful when preparing data for display in tables.
```
disgust_order <- disgust %>%
arrange(date, moral1)
```
Table 5\.2: Rows 1\-6 from `disgust_order`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 |
| 3 | 155324 | 2008\-07\-11 | 2 | 4 | 3 | 5 | 2 | 1 | 4 | 1 | 0 | 1 | 2 | 2 | 6 | 1 | 4 | 3 | 1 | 0 | 4 | 4 | 2 |
| 6 | 155386 | 2008\-07\-12 | 2 | 4 | 0 | 4 | 0 | 0 | 0 | 6 | 0 | 0 | 6 | 4 | 4 | 6 | 4 | 5 | 5 | 1 | 6 | 4 | 2 |
| 7 | 155409 | 2008\-07\-12 | 4 | 5 | 5 | 4 | 5 | 1 | 5 | 3 | 0 | 1 | 5 | 2 | 0 | 0 | 5 | 5 | 3 | 4 | 4 | 2 | 6 |
| 4 | 155366 | 2008\-07\-12 | 6 | 6 | 6 | 3 | 6 | 6 | 6 | 0 | 0 | 0 | 0 | 0 | 0 | 3 | 4 | 4 | 5 | 5 | 4 | 6 | 0 |
| 5 | 155370 | 2008\-07\-12 | 6 | 6 | 4 | 6 | 6 | 6 | 6 | 2 | 6 | 4 | 3 | 6 | 6 | 6 | 6 | 6 | 6 | 2 | 4 | 4 | 6 |
Reverse the order using `desc()`.
```
disgust_order_desc <- disgust %>%
arrange(desc(date))
```
Table 5\.3: Rows 1\-6 from `disgust_order_desc`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 39456 | 356866 | 2017\-08\-21 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 | 1 |
| 39447 | 128727 | 2017\-08\-13 | 2 | 4 | 1 | 2 | 2 | 5 | 3 | 0 | 0 | 1 | 0 | 0 | 2 | 1 | 2 | 0 | 2 | 1 | 1 | 1 | 1 |
| 39371 | 152955 | 2017\-06\-13 | 6 | 6 | 3 | 6 | 6 | 6 | 6 | 1 | 0 | 0 | 2 | 1 | 4 | 4 | 5 | 0 | 5 | 4 | 3 | 6 | 3 |
| 39342 | 48303 | 2017\-05\-22 | 4 | 5 | 4 | 4 | 6 | 4 | 5 | 2 | 1 | 4 | 1 | 1 | 3 | 1 | 5 | 5 | 4 | 4 | 4 | 4 | 5 |
| 39159 | 151633 | 2017\-04\-04 | 4 | 5 | 6 | 5 | 3 | 6 | 2 | 6 | 4 | 0 | 4 | 0 | 3 | 6 | 4 | 4 | 6 | 6 | 6 | 6 | 4 |
| 38942 | 370464 | 2017\-02\-01 | 1 | 5 | 0 | 6 | 5 | 5 | 5 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 5 | 0 | 3 | 3 | 1 | 6 | 3 |
### 5\.4\.4 mutate()
Add new columns. This is one of the most useful functions in the tidyverse.
Refer to other columns by their names (unquoted). You can add more than one column in the same `mutate()` function; just separate the columns with a comma. Once you make a new column, you can use it in further column definitions (e.g., `total` below).
```
disgust_total <- disgust %>%
mutate(
pathogen = pathogen1 + pathogen2 + pathogen3 + pathogen4 + pathogen5 + pathogen6 + pathogen7,
moral = moral1 + moral2 + moral3 + moral4 + moral5 + moral6 + moral7,
sexual = sexual1 + sexual2 + sexual3 + sexual4 + sexual5 + sexual6 + sexual7,
total = pathogen + moral + sexual,
user_id = paste0("U", user_id)
)
```
Table 5\.4: Rows 1\-6 from `disgust_total`
| id | user\_id | date | moral1 | moral2 | moral3 | moral4 | moral5 | moral6 | moral7 | sexual1 | sexual2 | sexual3 | sexual4 | sexual5 | sexual6 | sexual7 | pathogen1 | pathogen2 | pathogen3 | pathogen4 | pathogen5 | pathogen6 | pathogen7 | pathogen | moral | sexual | total |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | U0 | 2008\-10\-07 | 5 | 6 | 4 | 6 | 5 | 5 | 6 | 4 | 0 | 1 | 0 | 1 | 4 | 5 | 6 | 1 | 6 | 5 | 4 | 5 | 6 | 33 | 37 | 15 | 85 |
| 1 | U1 | 2008\-07\-10 | 2 | 2 | 1 | 2 | 1 | 1 | 1 | 3 | 1 | 1 | 2 | 1 | 2 | 2 | 3 | 2 | 3 | 3 | 2 | 3 | 3 | 19 | 10 | 12 | 41 |
| 1599 | U2 | 2008\-10\-27 | 1 | 1 | 1 | 1 | NA | NA | 1 | 1 | NA | 1 | NA | 1 | NA | NA | NA | NA | 1 | NA | NA | NA | NA | NA | NA | NA | NA |
| 13332 | U2118 | 2012\-01\-02 | 0 | 1 | 1 | 1 | 1 | 2 | 1 | 4 | 3 | 0 | 6 | 0 | 3 | 5 | 5 | 6 | 4 | 6 | 5 | 5 | 4 | 35 | 7 | 21 | 63 |
| 23 | U2311 | 2008\-07\-15 | 4 | 4 | 4 | 4 | 4 | 4 | 4 | 2 | 1 | 2 | 1 | 1 | 1 | 5 | 5 | 5 | 4 | 4 | 5 | 4 | 3 | 30 | 28 | 13 | 71 |
| 1160 | U3630 | 2008\-10\-06 | 1 | 5 | NA | 5 | 5 | 5 | 1 | 0 | 5 | 0 | 2 | 0 | 1 | 0 | 6 | 3 | 1 | 1 | 3 | 1 | 0 | 15 | NA | 8 | NA |
You can overwrite a column by giving a new column the same name as the old column (see `user_id` above). Make sure that you mean to do this and that you aren’t trying to use the old column value after you redefine it.
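A minimal sketch of the pitfall (the `user_id_num` column is made up purely for illustration): columns in a single `mutate()` are created in order, so a definition that comes after the overwrite only sees the new text version of `user_id`, not the original number.
```
disgust %>%
  mutate(
    user_id = paste0("U", user_id),
    user_id_num = as.numeric(user_id) # all NA: the original numeric id is already gone
  )
```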
### 5\.4\.5 summarise()
Create summary statistics for the dataset. Check the [Data Wrangling Cheat Sheet](https://www.rstudio.org/links/data_wrangling_cheat_sheet) or the [Data Transformation Cheat Sheet](https://github.com/rstudio/cheatsheets/raw/master/source/pdfs/data-transformation-cheatsheet.pdf) for various summary functions. Some common ones are: `mean()`, `sd()`, `n()`, `sum()`, and `quantile()`.
```
disgust_summary <- disgust_total %>%
summarise(
n = n(),
q25 = quantile(total, .25, na.rm = TRUE),
q50 = quantile(total, .50, na.rm = TRUE),
q75 = quantile(total, .75, na.rm = TRUE),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE)
)
```
Table 5\.5: All rows from `disgust_summary`
| n | q25 | q50 | q75 | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 20000 | 59 | 71 | 83 | 70\.6868 | 18\.24253 | 0 | 126 |
### 5\.4\.6 group\_by()
Create subsets of the data. You can use this to create summaries,
like the mean value for all of your experimental groups.
Here, we’ll use `mutate` to create a new column called `year`, group by `year`, and calculate the average scores.
```
disgust_groups <- disgust_total %>%
mutate(year = year(date)) %>%
group_by(year) %>%
summarise(
n = n(),
avg_total = mean(total, na.rm = TRUE),
sd_total = sd(total, na.rm = TRUE),
min_total = min(total, na.rm = TRUE),
max_total = max(total, na.rm = TRUE),
.groups = "drop"
)
```
Table 5\.6: All rows from `disgust_groups`
| year | n | avg\_total | sd\_total | min\_total | max\_total |
| --- | --- | --- | --- | --- | --- |
| 2008 | 2578 | 70\.29975 | 18\.46251 | 0 | 126 |
| 2009 | 2580 | 69\.74481 | 18\.61959 | 3 | 126 |
| 2010 | 1514 | 70\.59238 | 18\.86846 | 6 | 126 |
| 2011 | 6046 | 71\.34425 | 17\.79446 | 0 | 126 |
| 2012 | 5938 | 70\.42530 | 18\.35782 | 0 | 126 |
| 2013 | 1251 | 71\.59574 | 17\.61375 | 0 | 126 |
| 2014 | 58 | 70\.46296 | 17\.23502 | 19 | 113 |
| 2015 | 21 | 74\.26316 | 16\.89787 | 43 | 107 |
| 2016 | 8 | 67\.87500 | 32\.62531 | 0 | 110 |
| 2017 | 6 | 57\.16667 | 27\.93862 | 21 | 90 |
If you don’t add `.groups = "drop"` at the end of the `summarise()` function, you will get the following message: “`summarise()` ungrouping output (override with `.groups` argument).” This just reminds you that the groups are still in effect and any further functions will also be grouped.
Older versions of dplyr didn’t do this, so older code will generate this warning if you run it with a newer version of dplyr. Older code might `ungroup()` after `summarise()` to indicate that groupings should be dropped. The default behaviour is usually correct, so you don’t need to worry, but it’s best to explicitly set `.groups` in a `summarise()` function after `group_by()` if you want to “keep” or “drop” the groupings.
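If you’re ever unsure whether a result is still grouped, dplyr’s `group_vars()` lists the grouping columns that remain; a quick sketch:
```
disgust_total %>%
  mutate(year = year(date)) %>%
  group_by(year) %>%
  summarise(n = n(), .groups = "keep") %>%
  group_vars() # "year" -- with .groups = "drop" this would be character(0)
```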
You can use `filter` after `group_by`. The following example returns the lowest total score from each year (i.e., the row where the `rank()` of the value in the column `total` is equivalent to `1`).
```
disgust_lowest <- disgust_total %>%
mutate(year = year(date)) %>%
select(user_id, year, total) %>%
group_by(year) %>%
filter(rank(total) == 1) %>%
arrange(year)
```
Table 5\.7: All rows from `disgust_lowest`
| user\_id | year | total |
| --- | --- | --- |
| U236585 | 2009 | 3 |
| U292359 | 2010 | 6 |
| U245384 | 2013 | 0 |
| U206293 | 2014 | 19 |
| U407089 | 2015 | 43 |
| U453237 | 2016 | 0 |
| U356866 | 2017 | 21 |
You can also use `mutate` after `group_by`. The following example calculates subject\-mean\-centered scores by grouping the scores by `user_id` and then subtracting the group\-specific mean from each score. Note the use of `gather` to tidy the data into a long format first.
```
disgust_smc <- disgust %>%
gather("question", "score", moral1:pathogen7) %>%
group_by(user_id) %>%
mutate(score_smc = score - mean(score, na.rm = TRUE)) %>%
ungroup()
```
Use `ungroup()` as soon as you are done with grouped functions, otherwise the data table will still be grouped when you use it in the future.
Table 5\.8: Rows 1\-6 from `disgust_smc`
| id | user\_id | date | question | score | score\_smc |
| --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | moral1 | 5 | 0\.9523810 |
| 1 | 1 | 2008\-07\-10 | moral1 | 2 | 0\.0476190 |
| 1599 | 2 | 2008\-10\-27 | moral1 | 1 | 0\.0000000 |
| 13332 | 2118 | 2012\-01\-02 | moral1 | 0 | \-3\.0000000 |
| 23 | 2311 | 2008\-07\-15 | moral1 | 4 | 0\.6190476 |
| 1160 | 3630 | 2008\-10\-06 | moral1 | 1 | \-1\.2500000 |
### 5\.4\.7 All Together
A lot of what we did above would be easier if the data were tidy, so let’s do that first. Then we can use `group_by` to calculate the domain scores.
After that, we can spread out the 3 domains, calculate the total score, remove any rows with a missing (`NA`) total, and calculate mean values by year.
```
disgust_tidy <- dataskills::disgust %>%
gather("question", "score", moral1:pathogen7) %>%
separate(question, c("domain","q_num"), sep = -1) %>%
group_by(id, user_id, date, domain) %>%
summarise(score = mean(score), .groups = "drop")
```
Table 5\.9: Rows 1\-6 from `disgust_tidy`
| id | user\_id | date | domain | score |
| --- | --- | --- | --- | --- |
| 1 | 1 | 2008\-07\-10 | moral | 1\.428571 |
| 1 | 1 | 2008\-07\-10 | pathogen | 2\.714286 |
| 1 | 1 | 2008\-07\-10 | sexual | 1\.714286 |
| 3 | 155324 | 2008\-07\-11 | moral | 3\.000000 |
| 3 | 155324 | 2008\-07\-11 | pathogen | 2\.571429 |
| 3 | 155324 | 2008\-07\-11 | sexual | 1\.857143 |
```
disgust_scored <- disgust_tidy %>%
spread(domain, score) %>%
mutate(
total = moral + sexual + pathogen,
year = year(date)
) %>%
filter(!is.na(total)) %>%
arrange(user_id)
```
Table 5\.10: Rows 1\-6 from `disgust_scored`
| id | user\_id | date | moral | pathogen | sexual | total | year |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1199 | 0 | 2008\-10\-07 | 5\.285714 | 4\.714286 | 2\.142857 | 12\.142857 | 2008 |
| 1 | 1 | 2008\-07\-10 | 1\.428571 | 2\.714286 | 1\.714286 | 5\.857143 | 2008 |
| 13332 | 2118 | 2012\-01\-02 | 1\.000000 | 5\.000000 | 3\.000000 | 9\.000000 | 2012 |
| 23 | 2311 | 2008\-07\-15 | 4\.000000 | 4\.285714 | 1\.857143 | 10\.142857 | 2008 |
| 7980 | 4458 | 2011\-09\-05 | 3\.428571 | 3\.571429 | 3\.000000 | 10\.000000 | 2011 |
| 552 | 4651 | 2008\-08\-23 | 3\.857143 | 4\.857143 | 4\.285714 | 13\.000000 | 2008 |
```
disgust_summarised <- disgust_scored %>%
group_by(year) %>%
summarise(
n = n(),
avg_pathogen = mean(pathogen),
avg_moral = mean(moral),
avg_sexual = mean(sexual),
first_user = first(user_id),
last_user = last(user_id),
.groups = "drop"
)
```
Table 5\.11: Rows 1\-6 from `disgust_summarised`
| year | n | avg\_pathogen | avg\_moral | avg\_sexual | first\_user | last\_user |
| --- | --- | --- | --- | --- | --- | --- |
| 2008 | 2392 | 3\.697265 | 3\.806259 | 2\.539298 | 0 | 188708 |
| 2009 | 2410 | 3\.674333 | 3\.760937 | 2\.528275 | 6093 | 251959 |
| 2010 | 1418 | 3\.731412 | 3\.843139 | 2\.510075 | 5469 | 319641 |
| 2011 | 5586 | 3\.756918 | 3\.806506 | 2\.628612 | 4458 | 406569 |
| 2012 | 5375 | 3\.740465 | 3\.774591 | 2\.545701 | 2118 | 458194 |
| 2013 | 1222 | 3\.771920 | 3\.906944 | 2\.549100 | 7646 | 462428 |
| 2014 | 54 | 3\.759259 | 4\.000000 | 2\.306878 | 11090 | 461307 |
| 2015 | 19 | 3\.781955 | 4\.451128 | 2\.375940 | 102699 | 460283 |
| 2016 | 8 | 3\.696429 | 3\.625000 | 2\.375000 | 4976 | 453237 |
| 2017 | 6 | 3\.071429 | 3\.690476 | 1\.404762 | 48303 | 370464 |
5\.5 Additional dplyr one\-table verbs
--------------------------------------
Use the code examples below and the help pages to figure out what the following one\-table verbs do. Most have pretty self\-explanatory names.
### 5\.5\.1 rename()
You can rename columns with `rename()`. Set the argument name to the new name, and the value to the old name. You need to put a name in quotes or backticks if it doesn’t follow the rules for a good variable name (contains only letters, numbers, underscores, and full stops; and doesn’t start with a number).
```
sw <- starwars %>%
rename(Name = name,
Height = height,
Mass = mass,
`Hair Colour` = hair_color,
`Skin Colour` = skin_color,
`Eye Colour` = eye_color,
`Birth Year` = birth_year)
names(sw)
```
```
## [1] "Name" "Height" "Mass" "Hair Colour" "Skin Colour"
## [6] "Eye Colour" "Birth Year" "sex" "gender" "homeworld"
## [11] "species" "films" "vehicles" "starships"
```
Almost everyone gets confused at some point with `rename()` and tries to put the original names on the left and the new names on the right. Try it and see what the error message looks like.
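For reference, the reversed version looks like the sketch below; it is left commented out so you can run it yourself and inspect the message.
```
# new names go on the left, old names on the right -- this way round fails
# because starwars has no column called "Name"
# starwars %>% rename(name = Name)
```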
### 5\.5\.2 distinct()
Get rid of exactly duplicate rows with `distinct()`. This can be helpful if, for example, you are merging data from multiple computers and some of the data got copied from one computer to another, creating duplicate rows.
```
# create a data table with duplicated values
dupes <- tibble(
id = c( 1, 2, 1, 2, 1, 2),
dv = c("A", "B", "C", "D", "A", "B")
)
distinct(dupes)
```
| id | dv |
| --- | --- |
| 1 | A |
| 2 | B |
| 1 | C |
| 2 | D |
### 5\.5\.3 count()
The function `count()` is a quick shortcut for the common combination of `group_by()` and `summarise()` used to count the number of rows per group.
```
starwars %>%
group_by(sex) %>%
summarise(n = n(), .groups = "drop")
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
```
count(starwars, sex)
```
| sex | n |
| --- | --- |
| female | 16 |
| hermaphroditic | 1 |
| male | 60 |
| none | 6 |
| NA | 4 |
### 5\.5\.4 slice()
Choose rows by their position with `slice()`, e.g., rows 1 to 3 and row 10 below.
```
slice(starwars, 1:3, 10)
```
| name | height | mass | hair\_color | skin\_color | eye\_color | birth\_year | sex | gender | homeworld | species | films | vehicles | starships |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Luke Skywalker | 172 | 77 | blond | fair | blue | 19 | male | masculine | Tatooine | Human | The Empire Strikes Back, Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | Snowspeeder , Imperial Speeder Bike | X\-wing , Imperial shuttle |
| C\-3PO | 167 | 75 | NA | gold | yellow | 112 | none | masculine | Tatooine | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | | |
| R2\-D2 | 96 | 32 | NA | white, blue | red | 33 | none | masculine | Naboo | Droid | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope , The Force Awakens | | |
| Obi\-Wan Kenobi | 182 | 77 | auburn, white | fair | blue\-gray | 57 | male | masculine | Stewjon | Human | The Empire Strikes Back, Attack of the Clones , The Phantom Menace , Revenge of the Sith , Return of the Jedi , A New Hope | Tribubble bongo | Jedi starfighter , Trade Federation cruiser, Naboo star skiff , Jedi Interceptor , Belbullab\-22 starfighter |
### 5\.5\.5 pull()
Extract a single column as a vector with `pull()`.
```
starwars %>%
filter(species == "Droid") %>%
pull(name)
```
```
## [1] "C-3PO" "R2-D2" "R5-D4" "IG-88" "R4-P17" "BB8"
```
5\.6 Window functions
---------------------
Window functions use the order of rows to calculate values. You can use them for tasks that require ranking or ordering, like choosing the top scores in each class, or for accessing the previous and next rows, like calculating cumulative sums or means.
The [dplyr window functions vignette](https://dplyr.tidyverse.org/articles/window-functions.html) has very good detailed explanations of these functions, but we’ve described a few of the most useful ones below.
### 5\.6\.1 Ranking functions
```
grades <- tibble(
id = 1:5,
"Data Skills" = c(16, 17, 17, 19, 20),
"Statistics" = c(14, 16, 18, 18, 19)
) %>%
gather(class, grade, 2:3) %>%
group_by(class) %>%
mutate(row_number = row_number(),
rank = rank(grade),
min_rank = min_rank(grade),
dense_rank = dense_rank(grade),
quartile = ntile(grade, 4),
percentile = ntile(grade, 100))
```
Table 5\.12: All rows from `grades`
| id | class | grade | row\_number | rank | min\_rank | dense\_rank | quartile | percentile |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | Data Skills | 16 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Data Skills | 17 | 2 | 2\.5 | 2 | 2 | 1 | 2 |
| 3 | Data Skills | 17 | 3 | 2\.5 | 2 | 2 | 2 | 3 |
| 4 | Data Skills | 19 | 4 | 4\.0 | 4 | 3 | 3 | 4 |
| 5 | Data Skills | 20 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
| 1 | Statistics | 14 | 1 | 1\.0 | 1 | 1 | 1 | 1 |
| 2 | Statistics | 16 | 2 | 2\.0 | 2 | 2 | 1 | 2 |
| 3 | Statistics | 18 | 3 | 3\.5 | 3 | 3 | 2 | 3 |
| 4 | Statistics | 18 | 4 | 3\.5 | 3 | 3 | 3 | 4 |
| 5 | Statistics | 19 | 5 | 5\.0 | 5 | 4 | 4 | 5 |
* What are the differences among `row_number()`, `rank()`, `min_rank()`, `dense_rank()`, and `ntile()`?
* Why doesn’t `row_number()` need an argument?
* What would happen if you gave it the argument `grade` or `class`?
* What do you think would happen if you removed the `group_by(class)` line above?
* What if you added `id` to the grouping?
* What happens if you change the order of the rows?
* What does the second argument in `ntile()` do?
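For instance, to actually pick out the top score in each class (a sketch building on the `grades` table above, which is still grouped by `class`):
```
top_grades <- grades %>%
  filter(min_rank(desc(grade)) == 1) %>% # rank 1 = highest grade within each class
  ungroup()
```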
You can use window functions to group your data into quantiles.
```
sw_mass <- starwars %>%
group_by(tertile = ntile(mass, 3)) %>%
summarise(min = min(mass),
max = max(mass),
mean = mean(mass),
.groups = "drop")
```
Table 5\.13: All rows from `sw_mass`
| tertile | min | max | mean |
| --- | --- | --- | --- |
| 1 | 15 | 68 | 45\.6600 |
| 2 | 74 | 82 | 78\.4100 |
| 3 | 83 | 1358 | 171\.5789 |
| NA | NA | NA | NA |
Why is there a row of `NA` values? How would you get rid of them?
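One possible fix (just a sketch; there are other options): the `NA` tertile collects the characters whose `mass` is missing, so you could drop those rows before grouping.
```
sw_mass_complete <- starwars %>%
  filter(!is.na(mass)) %>% # remove characters with missing mass first
  group_by(tertile = ntile(mass, 3)) %>%
  summarise(min = min(mass),
            max = max(mass),
            mean = mean(mass),
            .groups = "drop")
```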
### 5\.6\.2 Offset functions
The function `lag()` gives a previous row’s value. It defaults to 1 row back, but you can change that with the `n` argument. The function `lead()` gives values ahead of the current row.
```
lag_lead <- tibble(x = 1:6) %>%
mutate(lag = lag(x),
lag2 = lag(x, n = 2),
lead = lead(x, default = 0))
```
Table 5\.14: All rows from `lag_lead`
| x | lag | lag2 | lead |
| --- | --- | --- | --- |
| 1 | NA | NA | 2 |
| 2 | 1 | NA | 3 |
| 3 | 2 | 1 | 4 |
| 4 | 3 | 2 | 5 |
| 5 | 4 | 3 | 6 |
| 6 | 5 | 4 | 0 |
You can use offset functions to calculate change between trials or where a value changes. Use the `order_by` argument to specify the order of the rows. Alternatively, you can use `arrange()` before the offset functions.
```
trials <- tibble(
trial = sample(1:10, 10),
cond = sample(c("exp", "ctrl"), 10, replace = TRUE),
score = rpois(10, 4)
) %>%
mutate(
score_change = score - lag(score, order_by = trial),
change_cond = cond != lag(cond, order_by = trial,
default = "no condition")
) %>%
arrange(trial)
```
Table 5\.15: All rows from `trials`
| trial | cond | score | score\_change | change\_cond |
| --- | --- | --- | --- | --- |
| 1 | ctrl | 8 | NA | TRUE |
| 2 | ctrl | 4 | \-4 | FALSE |
| 3 | exp | 6 | 2 | TRUE |
| 4 | ctrl | 2 | \-4 | TRUE |
| 5 | ctrl | 3 | 1 | FALSE |
| 6 | ctrl | 6 | 3 | FALSE |
| 7 | ctrl | 2 | \-4 | FALSE |
| 8 | exp | 4 | 2 | TRUE |
| 9 | ctrl | 4 | 0 | TRUE |
| 10 | exp | 3 | \-1 | TRUE |
Look at the help pages for `lag()` and `lead()`.
* What happens if you remove the `order_by` argument or change it to `cond`?
* What does the `default` argument do?
* Can you think of circumstances in your own data where you might need to use `lag()` or `lead()`?
### 5\.6\.3 Cumulative aggregates
`cumsum()`, `cummin()`, and `cummax()` are base R functions for calculating cumulative sums, minimums, and maximums. The dplyr package introduces `cumany()` and `cumall()`, which return `TRUE` if any or all of the previous values meet their criteria.
```
cumulative <- tibble(
time = 1:10,
obs = c(2, 2, 1, 2, 4, 3, 1, 0, 3, 5)
) %>%
mutate(
cumsum = cumsum(obs),
cummin = cummin(obs),
cummax = cummax(obs),
cumany = cumany(obs == 3),
cumall = cumall(obs < 4)
)
```
Table 5\.16: All rows from `cumulative`
| time | obs | cumsum | cummin | cummax | cumany | cumall |
| --- | --- | --- | --- | --- | --- | --- |
| 1 | 2 | 2 | 2 | 2 | FALSE | TRUE |
| 2 | 2 | 4 | 2 | 2 | FALSE | TRUE |
| 3 | 1 | 5 | 1 | 2 | FALSE | TRUE |
| 4 | 2 | 7 | 1 | 2 | FALSE | TRUE |
| 5 | 4 | 11 | 1 | 4 | FALSE | FALSE |
| 6 | 3 | 14 | 1 | 4 | TRUE | FALSE |
| 7 | 1 | 15 | 1 | 4 | TRUE | FALSE |
| 8 | 0 | 15 | 0 | 4 | TRUE | FALSE |
| 9 | 3 | 18 | 0 | 4 | TRUE | FALSE |
| 10 | 5 | 23 | 0 | 5 | TRUE | FALSE |
* What would happen if you change `cumany(obs == 3)` to `cumany(obs > 2)`?
* What would happen if you change `cumall(obs < 4)` to `cumall(obs < 2)`?
* Can you think of circumstances in your own data where you might need to use `cumany()` or `cumall()`?
5\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [data wrangling](https://psyteachr.github.io/glossary/d#data.wrangling) | The process of preparing data for visualisation and statistical analysis. |
5\.8 Exercises
--------------
Download the [exercises](exercises/05_dplyr_exercise.Rmd). See the [answers](exercises/05_dplyr_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(5)
# run this to access the answers
dataskills::exercise(5, answers = TRUE)
```
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/joins.html |
Chapter 6 Data Relations
========================
6\.1 Learning Objectives
------------------------
1. Be able to use the 4 mutating join verbs: [(video)](https://youtu.be/WV0yg6f3DNM)
* [`left_join()`](joins.html#left_join)
* [`right_join()`](joins.html#right_join)
* [`inner_join()`](joins.html#inner_join)
* [`full_join()`](joins.html#full_join)
2. Be able to use the 2 filtering join verbs: [(video)](https://youtu.be/ijoCEKifefQ)
* [`semi_join()`](joins.html#semi_join)
* [`anti_join()`](joins.html#anti_join)
3. Be able to use the 2 binding join verbs: [(video)](https://youtu.be/8RWdNhbVZ4I)
* [`bind_rows()`](joins.html#bind_rows)
* [`bind_cols()`](joins.html#bind_cols)
4. Be able to use the 3 set operations: [(video)](https://youtu.be/c3V33ElWUYI)
* [`intersect()`](joins.html#intersect)
* [`union()`](joins.html#union)
* [`setdiff()`](joins.html#setdiff)
6\.2 Resources
--------------
* [Chapter 13: Relational Data](http://r4ds.had.co.nz/relational-data.html) in *R for Data Science*
* [Cheatsheet for dplyr join functions](http://stat545.com/bit001_dplyr-cheatsheet.html)
* [Lecture slides on dplyr two\-table verbs](slides/05_joins_slides.pdf)
6\.3 Setup
----------
```
# libraries needed
library(tidyverse)
```
6\.4 Data
---------
First, we’ll create two small data tables.
`subject` has id, gender and age for subjects 1\-5\. Age and gender are missing for subject 3\.
```
subject <- tibble(
id = 1:5,
gender = c("m", "m", NA, "nb", "f"),
age = c(19, 22, NA, 19, 18)
)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
`exp` has subject id and the score from an experiment. Some subjects are missing, some completed twice, and some are not in the subject table.
```
exp <- tibble(
id = c(2, 3, 4, 4, 5, 5, 6, 6, 7),
score = c(10, 18, 21, 23, 9, 11, 11, 12, 3)
)
```
| id | score |
| --- | --- |
| 2 | 10 |
| 3 | 18 |
| 4 | 21 |
| 4 | 23 |
| 5 | 9 |
| 5 | 11 |
| 6 | 11 |
| 6 | 12 |
| 7 | 3 |
6\.5 Mutating Joins
-------------------
[Mutating joins](https://psyteachr.github.io/glossary/m#mutating-joins "Joins that act like the dplyr::mutate() function in that they add new columns to one table based on values in another table.") act like the `mutate()` function in that they add new columns to one table based on values in another table.
All the mutating joins have this basic syntax:
`****_join(x, y, by = NULL, suffix = c(".x", ".y"))`
* `x` \= the first (left) table
* `y` \= the second (right) table
* `by` \= what columns to match on. If you leave this blank, it will match on all columns with the same names in the two tables.
* `suffix` \= if columns have the same name in the two tables, but you aren’t joining by them, they get a suffix to make them unambiguous. This defaults to “.x” and “.y,” but you can change it to something more meaningful (see the sketch after this list).
You can leave out the `by` argument if you’re matching on all of the columns with the same name, but it’s good practice to always specify it so your code is robust to changes in the loaded data.
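The `suffix` argument only matters when a non\-key column name appears in both tables. A quick sketch (the `subject2` table below is made up purely to create a clash with the `score` column in `exp`):
```
subject2 <- tibble(
  id = 1:3,
  score = c(100, 90, 80) # a second "score" column, clashing with exp$score
)

left_join(exp, subject2, by = "id", suffix = c("_exp", "_subject"))
# columns: id, score_exp, score_subject
```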
### 6\.5\.1 left\_join()
Figure 6\.1: Left Join
A `left_join` keeps all the data from the first (left) table and joins anything that matches from the second (right) table. If the right table has more than one match for a row in the left table, there will be more than one row in the joined table (see ids 4 and 5\).
```
left_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
Figure 6\.2: Left Join (reversed)
The order of tables is swapped here, so the result is all rows from the `exp` table joined to any matching rows from the `subject` table.
```
left_join(exp, subject, by = "id")
```
| id | score | gender | age |
| --- | --- | --- | --- |
| 2 | 10 | m | 22 |
| 3 | 18 | NA | NA |
| 4 | 21 | nb | 19 |
| 4 | 23 | nb | 19 |
| 5 | 9 | f | 18 |
| 5 | 11 | f | 18 |
| 6 | 11 | NA | NA |
| 6 | 12 | NA | NA |
| 7 | 3 | NA | NA |
### 6\.5\.2 right\_join()
Figure 6\.3: Right Join
A `right_join` keeps all the data from the second (right) table and joins anything that matches from the first (left) table.
```
right_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
This table has the same information as `left_join(exp, subject, by = "id")`, but the columns are in a different order (left table, then right table).
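A quick sketch to check this: reorder the columns with `select()` and the result lines up with the table above.
```
# same rows as left_join(exp, subject, by = "id"), with the columns reordered
right_join(subject, exp, by = "id") %>%
  select(id, score, gender, age)
```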
### 6\.5\.3 inner\_join()
Figure 6\.4: Inner Join
An `inner_join` returns all the rows that have a match in the other table.
```
inner_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
### 6\.5\.4 full\_join()
Figure 6\.5: Full Join
A `full_join` lets you join up rows in two tables while keeping all of the information from both tables. If a row doesn’t have a match in the other table, the other table’s column values are set to `NA`.
```
full_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
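As a quick summary of the four mutating joins with these two tables, you can compare how many rows each one returns (a sketch; the counts match the tables above):
```
nrow(left_join(subject, exp, by = "id"))  # 7: all subjects, doubled for ids 4 and 5
nrow(right_join(subject, exp, by = "id")) # 9: all scores
nrow(inner_join(subject, exp, by = "id")) # 6: only subjects that have scores
nrow(full_join(subject, exp, by = "id"))  # 10: everything from both tables
```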
6\.6 Filtering Joins
--------------------
[Filtering joins](https://psyteachr.github.io/glossary/f#filtering-joins "Joins that act like the dplyr::filter() function in that they remove rows from the data in one table based on the values in another table.") act like the `filter()` function in that they remove rows from the data in one table based on the values in another table. The result of a filtering join will only contain rows from the left table and have the same number or fewer rows than the left table.
### 6\.6\.1 semi\_join()
Figure 6\.6: Semi Join
A `semi_join` returns all rows from the left table where there are matching values in the right table, keeping just columns from the left table.
```
semi_join(subject, exp, by = "id")
```
| id | gender | age |
| --- | --- | --- |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
Unlike an inner join, a semi join will never duplicate the rows in the left table if there is more than one matching row in the right table.
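You can see this difference directly by counting rows (a sketch based on the tables above):
```
nrow(semi_join(subject, exp, by = "id"))  # 4: one row per matching subject
nrow(inner_join(subject, exp, by = "id")) # 6: subjects 4 and 5 appear twice
```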
Figure 6\.7: Semi Join (Reversed)
Order matters in a semi join.
```
semi_join(exp, subject, by = "id")
```
| id | score |
| --- | --- |
| 2 | 10 |
| 3 | 18 |
| 4 | 21 |
| 4 | 23 |
| 5 | 9 |
| 5 | 11 |
### 6\.6\.2 anti\_join()
Figure 6\.8: Anti Join
An `anti_join` returns all rows from the left table where there are *not* matching values in the right table, keeping just columns from the left table.
```
anti_join(subject, exp, by = "id")
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
Figure 6\.9: Anti Join (Reversed)
Order matters in an anti join.
```
anti_join(exp, subject, by = "id")
```
| id | score |
| --- | --- |
| 6 | 11 |
| 6 | 12 |
| 7 | 3 |
6\.7 Binding Joins
------------------
[Binding joins](https://psyteachr.github.io/glossary/b#binding-joins "Joins that bind one table to another by adding their rows or columns together.") bind one table to another by adding their rows or columns together.
### 6\.7\.1 bind\_rows()
You can combine the rows of two tables with `bind_rows`.
Here we’ll add subject data for subjects 6\-9 and bind that to the original subject table.
```
new_subjects <- tibble(
id = 6:9,
gender = c("nb", "m", "f", "f"),
age = c(19, 16, 20, 19)
)
bind_rows(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
| 6 | nb | 19 |
| 7 | m | 16 |
| 8 | f | 20 |
| 9 | f | 19 |
The columns just have to have the same names; they don’t have to be in the same order. Any columns that differ between the two tables will just have `NA` values for entries from the other table.
If a row is duplicated between the two tables (like id 5 below), the row will also be duplicated in the resulting table. If your tables have the exact same columns, you can use `union()` (see below) to avoid duplicates.
```
new_subjects <- tibble(
id = 5:9,
age = c(18, 19, 16, 20, 19),
gender = c("f", "nb", "m", "f", "f"),
new = c(1,2,3,4,5)
)
bind_rows(subject, new_subjects)
```
| id | gender | age | new |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | NA |
| 3 | NA | NA | NA |
| 4 | nb | 19 | NA |
| 5 | f | 18 | NA |
| 5 | f | 18 | 1 |
| 6 | nb | 19 | 2 |
| 7 | m | 16 | 3 |
| 8 | f | 20 | 4 |
| 9 | f | 19 | 5 |
### 6\.7\.2 bind\_cols()
You can merge two tables with the same number of rows using `bind_cols`. This is only useful if the two tables have their rows in the exact same order. Its only advantage over a left join is when the tables don’t have any IDs to join by and you have to rely solely on their order.
```
new_info <- tibble(
colour = c("red", "orange", "yellow", "green", "blue")
)
bind_cols(subject, new_info)
```
| id | gender | age | colour |
| --- | --- | --- | --- |
| 1 | m | 19 | red |
| 2 | m | 22 | orange |
| 3 | NA | NA | yellow |
| 4 | nb | 19 | green |
| 5 | f | 18 | blue |
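Because `bind_cols()` pairs rows purely by position, it will silently misalign the data if one table is in a different order. Here is a cautionary sketch using the same `new_info` table:
```
# reversing the subject order misassigns every colour, with no warning
subject %>%
  arrange(desc(id)) %>%
  bind_cols(new_info)
# subject 5 is now paired with "red" and subject 1 with "blue"
```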
6\.8 Set Operations
-------------------
[Set operations](https://psyteachr.github.io/glossary/s#set-operations "Functions that compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff).") compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff).
### 6\.8\.1 intersect()
`intersect()` returns all rows in two tables that match exactly. The columns don’t have to be in the same order.
```
new_subjects <- tibble(
id = seq(4, 9),
age = c(19, 18, 19, 16, 20, 19),
gender = c("f", "f", "m", "m", "f", "f")
)
intersect(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 5 | f | 18 |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has an `intersect()` function. The error message can be confusing and looks something like this:
```
base::intersect(subject, new_subjects)
```
```
## Error: Must subset rows with a valid subscript vector.
## ℹ Logical subscripts must match the size of the indexed input.
## x Input has size 6 but subscript `!duplicated(x, fromLast = fromLast, ...)` has size 0.
```
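One way to avoid this ambiguity is to call the dplyr version explicitly with the `::` operator; the same trick works for `union()` and `setdiff()` below.
```
# explicitly use dplyr's data-frame method, even if dplyr isn't attached
dplyr::intersect(subject, new_subjects)
```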
### 6\.8\.2 union()
`union()` returns all the rows from both tables, removing duplicate rows.
```
union(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
| 4 | f | 19 |
| 6 | m | 19 |
| 7 | m | 16 |
| 8 | f | 20 |
| 9 | f | 19 |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has a `union()` function. You usually won’t get an error message, but the output won’t be what you expect.
```
base::union(subject, new_subjects)
```
```
## [[1]]
## [1] 1 2 3 4 5
##
## [[2]]
## [1] "m" "m" NA "nb" "f"
##
## [[3]]
## [1] 19 22 NA 19 18
##
## [[4]]
## [1] 4 5 6 7 8 9
##
## [[5]]
## [1] 19 18 19 16 20 19
##
## [[6]]
## [1] "f" "f" "m" "m" "f" "f"
```
### 6\.8\.3 setdiff()
`setdiff` returns rows that are in the first table, but not in the second table.
```
setdiff(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
Order matters for `setdiff`.
```
setdiff(new_subjects, subject)
```
| id | age | gender |
| --- | --- | --- |
| 4 | 19 | f |
| 6 | 19 | m |
| 7 | 16 | m |
| 8 | 20 | f |
| 9 | 19 | f |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has a `setdiff()` function. You usually won’t get an error message, but the output might not be what you expect: the base R `setdiff()` expects columns to be in the same order, so id 5 here registers as different between the two tables.
```
base::setdiff(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
6\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [binding joins](https://psyteachr.github.io/glossary/b#binding.joins) | Joins that bind one table to another by adding their rows or columns together. |
| [filtering joins](https://psyteachr.github.io/glossary/f#filtering.joins) | Joins that act like the dplyr::filter() function in that they remove rows from the data in one table based on the values in another table. |
| [mutating joins](https://psyteachr.github.io/glossary/m#mutating.joins) | Joins that act like the dplyr::mutate() function in that they add new columns to one table based on values in another table. |
| [set operations](https://psyteachr.github.io/glossary/s#set.operations) | Functions that compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff). |
6\.10 Exercises
---------------
Download the [exercises](exercises/06_joins_exercise.Rmd). See the [answers](exercises/06_joins_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(6)
# run this to access the answers
dataskills::exercise(6, answers = TRUE)
```
6\.1 Learning Objectives
------------------------
1. Be able to use the 4 mutating join verbs: [(video)](https://youtu.be/WV0yg6f3DNM)
* [`left_join()`](joins.html#left_join)
* [`right_join()`](joins.html#right_join)
* [`inner_join()`](joins.html#inner_join)
* [`full_join()`](joins.html#full_join)
2. Be able to use the 2 filtering join verbs: [(video)](https://youtu.be/ijoCEKifefQ)
* [`semi_join()`](joins.html#semi_join)
* [`anti_join()`](joins.html#anti_join)
3. Be able to use the 2 binding join verbs: [(video)](https://youtu.be/8RWdNhbVZ4I)
* [`bind_rows()`](joins.html#bind_rows)
* [`bind_cols()`](joins.html#bind_cols)
4. Be able to use the 3 set operations: [(video)](https://youtu.be/c3V33ElWUYI)
* [`intersect()`](joins.html#intersect)
* [`union()`](joins.html#union)
* [`setdiff()`](joins.html#setdiff)
6\.2 Resources
--------------
* [Chapter 13: Relational Data](http://r4ds.had.co.nz/relational-data.html) in *R for Data Science*
* [Cheatsheet for dplyr join functions](http://stat545.com/bit001_dplyr-cheatsheet.html)
* [Lecture slides on dplyr two\-table verbs](slides/05_joins_slides.pdf)
6\.3 Setup
----------
```
# libraries needed
library(tidyverse)
```
6\.4 Data
---------
First, we’ll create two small data tables.
`subject` has id, gender and age for subjects 1\-5\. Age and gender are missing for subject 3\.
```
subject <- tibble(
id = 1:5,
gender = c("m", "m", NA, "nb", "f"),
age = c(19, 22, NA, 19, 18)
)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
`exp` has subject id and the score from an experiment. Some subjects are missing, some completed twice, and some are not in the subject table.
```
exp <- tibble(
id = c(2, 3, 4, 4, 5, 5, 6, 6, 7),
score = c(10, 18, 21, 23, 9, 11, 11, 12, 3)
)
```
| id | score |
| --- | --- |
| 2 | 10 |
| 3 | 18 |
| 4 | 21 |
| 4 | 23 |
| 5 | 9 |
| 5 | 11 |
| 6 | 11 |
| 6 | 12 |
| 7 | 3 |
6\.5 Mutating Joins
-------------------
[Mutating joins](https://psyteachr.github.io/glossary/m#mutating-joins "Joins that act like the dplyr::mutate() function in that they add new columns to one table based on values in another table.") act like the `mutate()` function in that they add new columns to one table based on values in another table.
All the mutating joins have this basic syntax:
`****_join(x, y, by = NULL, suffix = c(".x", ".y")`
* `x` \= the first (left) table
* `y` \= the second (right) table
* `by` \= what columns to match on. If you leave this blank, it will match on all columns with the same names in the two tables.
* `suffix` \= if columns have the same name in the two tables, but you aren’t joining by them, they get a suffix to make them unambiguous. This defaults to “.x” and “.y,” but you can change it to something more meaningful.
You can leave out the `by` argument if you’re matching on all of the columns with the same name, but it’s good practice to always specify it so your code is robust to changes in the loaded data.
### 6\.5\.1 left\_join()
Figure 6\.1: Left Join
A `left_join` keeps all the data from the first (left) table and joins anything that matches from the second (right) table. If the right table has more than one match for a row in the right table, there will be more than one row in the joined table (see ids 4 and 5\).
```
left_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
Figure 6\.2: Left Join (reversed)
The order of tables is swapped here, so the result is all rows from the `exp` table joined to any matching rows from the `subject` table.
```
left_join(exp, subject, by = "id")
```
| id | score | gender | age |
| --- | --- | --- | --- |
| 2 | 10 | m | 22 |
| 3 | 18 | NA | NA |
| 4 | 21 | nb | 19 |
| 4 | 23 | nb | 19 |
| 5 | 9 | f | 18 |
| 5 | 11 | f | 18 |
| 6 | 11 | NA | NA |
| 6 | 12 | NA | NA |
| 7 | 3 | NA | NA |
### 6\.5\.2 right\_join()
Figure 6\.3: Right Join
A `right_join` keeps all the data from the second (right) table and joins anything that matches from the first (left) table.
```
right_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
This table has the same information as `left_join(exp, subject, by = "id")`, but the columns are in a different order (left table, then right table).
### 6\.5\.3 inner\_join()
Figure 6\.4: Inner Join
An `inner_join` returns all the rows that have a match in the other table.
```
inner_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
### 6\.5\.4 full\_join()
Figure 6\.5: Full Join
A `full_join` lets you join up rows in two tables while keeping all of the information from both tables. If a row doesn’t have a match in the other table, the other table’s column values are set to `NA`.
```
full_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
### 6\.5\.1 left\_join()
Figure 6\.1: Left Join
A `left_join` keeps all the data from the first (left) table and joins anything that matches from the second (right) table. If the right table has more than one match for a row in the right table, there will be more than one row in the joined table (see ids 4 and 5\).
```
left_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
Figure 6\.2: Left Join (reversed)
The order of tables is swapped here, so the result is all rows from the `exp` table joined to any matching rows from the `subject` table.
```
left_join(exp, subject, by = "id")
```
| id | score | gender | age |
| --- | --- | --- | --- |
| 2 | 10 | m | 22 |
| 3 | 18 | NA | NA |
| 4 | 21 | nb | 19 |
| 4 | 23 | nb | 19 |
| 5 | 9 | f | 18 |
| 5 | 11 | f | 18 |
| 6 | 11 | NA | NA |
| 6 | 12 | NA | NA |
| 7 | 3 | NA | NA |
### 6\.5\.2 right\_join()
Figure 6\.3: Right Join
A `right_join` keeps all the data from the second (right) table and joins anything that matches from the first (left) table.
```
right_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
This table has the same information as `left_join(exp, subject, by = "id")`, but the columns are in a different order (left table, then right table).
### 6\.5\.3 inner\_join()
Figure 6\.4: Inner Join
An `inner_join` returns all the rows that have a match in the other table.
```
inner_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
### 6\.5\.4 full\_join()
Figure 6\.5: Full Join
A `full_join` lets you join up rows in two tables while keeping all of the information from both tables. If a row doesn’t have a match in the other table, the other table’s column values are set to `NA`.
```
full_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
6\.6 Filtering Joins
--------------------
[Filtering joins](https://psyteachr.github.io/glossary/f#filtering-joins "Joins that act like the dplyr::filter() function in that they remove rows from the data in one table based on the values in another table.") act like the `filter()` function in that they remove rows from the data in one table based on the values in another table. The result of a filtering join will only contain rows from the left table and have the same number or fewer rows than the left table.
### 6\.6\.1 semi\_join()
Figure 6\.6: Semi Join
A `semi_join` returns all rows from the left table where there are matching values in the right table, keeping just columns from the left table.
```
semi_join(subject, exp, by = "id")
```
| id | gender | age |
| --- | --- | --- |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
Unlike an inner join, a semi join will never duplicate the rows in the left table if there is more than one matching row in the right table.
Figure 6\.7: Semi Join (Reversed)
Order matters in a semi join.
```
semi_join(exp, subject, by = "id")
```
| id | score |
| --- | --- |
| 2 | 10 |
| 3 | 18 |
| 4 | 21 |
| 4 | 23 |
| 5 | 9 |
| 5 | 11 |
### 6\.6\.2 anti\_join()
Figure 6\.8: Anti Join
An `anti_join` return all rows from the left table where there are *not* matching values in the right table, keeping just columns from the left table.
```
anti_join(subject, exp, by = "id")
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
Figure 6\.9: Anti Join (Reversed)
Order matters in an anti join.
```
anti_join(exp, subject, by = "id")
```
| id | score |
| --- | --- |
| 6 | 11 |
| 6 | 12 |
| 7 | 3 |
### 6\.6\.1 semi\_join()
Figure 6\.6: Semi Join
A `semi_join` returns all rows from the left table where there are matching values in the right table, keeping just columns from the left table.
```
semi_join(subject, exp, by = "id")
```
| id | gender | age |
| --- | --- | --- |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
Unlike an inner join, a semi join will never duplicate the rows in the left table if there is more than one matching row in the right table.
Figure 6\.7: Semi Join (Reversed)
Order matters in a semi join.
```
semi_join(exp, subject, by = "id")
```
| id | score |
| --- | --- |
| 2 | 10 |
| 3 | 18 |
| 4 | 21 |
| 4 | 23 |
| 5 | 9 |
| 5 | 11 |
### 6\.6\.2 anti\_join()
Figure 6\.8: Anti Join
An `anti_join` return all rows from the left table where there are *not* matching values in the right table, keeping just columns from the left table.
```
anti_join(subject, exp, by = "id")
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
Figure 6\.9: Anti Join (Reversed)
Order matters in an anti join.
```
anti_join(exp, subject, by = "id")
```
| id | score |
| --- | --- |
| 6 | 11 |
| 6 | 12 |
| 7 | 3 |
6\.7 Binding Joins
------------------
[Binding joins](https://psyteachr.github.io/glossary/b#binding-joins "Joins that bind one table to another by adding their rows or columns together.") bind one table to another by adding their rows or columns together.
### 6\.7\.1 bind\_rows()
You can combine the rows of two tables with `bind_rows`.
Here we’ll add subject data for subjects 6\-9 and bind that to the original subject table.
```
new_subjects <- tibble(
id = 6:9,
gender = c("nb", "m", "f", "f"),
age = c(19, 16, 20, 19)
)
bind_rows(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
| 6 | nb | 19 |
| 7 | m | 16 |
| 8 | f | 20 |
| 9 | f | 19 |
The columns just have to have the same names, they don’t have to be in the same order. Any columns that differ between the two tables will just have `NA` values for entries from the other table.
If a row is duplicated between the two tables (like id 5 below), the row will also be duplicated in the resulting table. If your tables have the exact same columns, you can use `union()` (see below) to avoid duplicates.
```
new_subjects <- tibble(
id = 5:9,
age = c(18, 19, 16, 20, 19),
gender = c("f", "nb", "m", "f", "f"),
new = c(1,2,3,4,5)
)
bind_rows(subject, new_subjects)
```
| id | gender | age | new |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | NA |
| 3 | NA | NA | NA |
| 4 | nb | 19 | NA |
| 5 | f | 18 | NA |
| 5 | f | 18 | 1 |
| 6 | nb | 19 | 2 |
| 7 | m | 16 | 3 |
| 8 | f | 20 | 4 |
| 9 | f | 19 | 5 |
### 6\.7\.2 bind\_cols()
You can merge two tables with the same number of rows using `bind_cols`. This is only useful if the two tables have their rows in the exact same order. The only advantage over a left join is when the tables don’t have any IDs to join by and you have to rely solely on their order.
```
new_info <- tibble(
colour = c("red", "orange", "yellow", "green", "blue")
)
bind_cols(subject, new_info)
```
| id | gender | age | colour |
| --- | --- | --- | --- |
| 1 | m | 19 | red |
| 2 | m | 22 | orange |
| 3 | NA | NA | yellow |
| 4 | nb | 19 | green |
| 5 | f | 18 | blue |
### 6\.7\.1 bind\_rows()
You can combine the rows of two tables with `bind_rows`.
Here we’ll add subject data for subjects 6\-9 and bind that to the original subject table.
```
new_subjects <- tibble(
id = 6:9,
gender = c("nb", "m", "f", "f"),
age = c(19, 16, 20, 19)
)
bind_rows(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
| 6 | nb | 19 |
| 7 | m | 16 |
| 8 | f | 20 |
| 9 | f | 19 |
The columns just have to have the same names, they don’t have to be in the same order. Any columns that differ between the two tables will just have `NA` values for entries from the other table.
If a row is duplicated between the two tables (like id 5 below), the row will also be duplicated in the resulting table. If your tables have the exact same columns, you can use `union()` (see below) to avoid duplicates.
```
new_subjects <- tibble(
id = 5:9,
age = c(18, 19, 16, 20, 19),
gender = c("f", "nb", "m", "f", "f"),
new = c(1,2,3,4,5)
)
bind_rows(subject, new_subjects)
```
| id | gender | age | new |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | NA |
| 3 | NA | NA | NA |
| 4 | nb | 19 | NA |
| 5 | f | 18 | NA |
| 5 | f | 18 | 1 |
| 6 | nb | 19 | 2 |
| 7 | m | 16 | 3 |
| 8 | f | 20 | 4 |
| 9 | f | 19 | 5 |
### 6\.7\.2 bind\_cols()
You can merge two tables with the same number of rows using `bind_cols`. This is only useful if the two tables have their rows in the exact same order. The only advantage over a left join is when the tables don’t have any IDs to join by and you have to rely solely on their order.
```
new_info <- tibble(
colour = c("red", "orange", "yellow", "green", "blue")
)
bind_cols(subject, new_info)
```
| id | gender | age | colour |
| --- | --- | --- | --- |
| 1 | m | 19 | red |
| 2 | m | 22 | orange |
| 3 | NA | NA | yellow |
| 4 | nb | 19 | green |
| 5 | f | 18 | blue |
6\.8 Set Operations
-------------------
[Set operations](https://psyteachr.github.io/glossary/s#set-operations "Functions that compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff).") compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff).
### 6\.8\.1 intersect()
`intersect()` returns all rows in two tables that match exactly. The columns don’t have to be in the same order.
```
new_subjects <- tibble(
id = seq(4, 9),
age = c(19, 18, 19, 16, 20, 19),
gender = c("f", "f", "m", "m", "f", "f")
)
intersect(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 5 | f | 18 |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has an `intersect()` function. The error message can be confusing and looks something like this:
```
base::intersect(subject, new_subjects)
```
```
## Error: Must subset rows with a valid subscript vector.
## ℹ Logical subscripts must match the size of the indexed input.
## x Input has size 6 but subscript `!duplicated(x, fromLast = fromLast, ...)` has size 0.
```
### 6\.8\.2 union()
`union()` returns all the rows from both tables, removing duplicate rows.
```
union(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
| 4 | f | 19 |
| 6 | m | 19 |
| 7 | m | 16 |
| 8 | f | 20 |
| 9 | f | 19 |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has a `union()` function. You usually won’t get an error message, but the output won’t be what you expect.
```
base::union(subject, new_subjects)
```
```
## [[1]]
## [1] 1 2 3 4 5
##
## [[2]]
## [1] "m" "m" NA "nb" "f"
##
## [[3]]
## [1] 19 22 NA 19 18
##
## [[4]]
## [1] 4 5 6 7 8 9
##
## [[5]]
## [1] 19 18 19 16 20 19
##
## [[6]]
## [1] "f" "f" "m" "m" "f" "f"
```
### 6\.8\.3 setdiff()
`setdiff` returns rows that are in the first table, but not in the second table.
```
setdiff(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
Order matters for `setdiff`.
```
setdiff(new_subjects, subject)
```
| id | age | gender |
| --- | --- | --- |
| 4 | 19 | f |
| 6 | 19 | m |
| 7 | 16 | m |
| 8 | 20 | f |
| 9 | 19 | f |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has a `setdiff()` function. You usually won’t get an error message, but the output might not be what you expect because the base R `setdiff()` expects columns to be in the same order, so id 5 here registers as different between the two tables.
```
base::setdiff(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
### 6\.8\.1 intersect()
`intersect()` returns all rows in two tables that match exactly. The columns don’t have to be in the same order.
```
new_subjects <- tibble(
id = seq(4, 9),
age = c(19, 18, 19, 16, 20, 19),
gender = c("f", "f", "m", "m", "f", "f")
)
intersect(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 5 | f | 18 |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has an `intersect()` function. The error message can be confusing and looks something like this:
```
base::intersect(subject, new_subjects)
```
```
## Error: Must subset rows with a valid subscript vector.
## ℹ Logical subscripts must match the size of the indexed input.
## x Input has size 6 but subscript `!duplicated(x, fromLast = fromLast, ...)` has size 0.
```
### 6\.8\.2 union()
`union()` returns all the rows from both tables, removing duplicate rows.
```
union(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
| 4 | f | 19 |
| 6 | m | 19 |
| 7 | m | 16 |
| 8 | f | 20 |
| 9 | f | 19 |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has a `union()` function. You usually won’t get an error message, but the output won’t be what you expect.
```
base::union(subject, new_subjects)
```
```
## [[1]]
## [1] 1 2 3 4 5
##
## [[2]]
## [1] "m" "m" NA "nb" "f"
##
## [[3]]
## [1] 19 22 NA 19 18
##
## [[4]]
## [1] 4 5 6 7 8 9
##
## [[5]]
## [1] 19 18 19 16 20 19
##
## [[6]]
## [1] "f" "f" "m" "m" "f" "f"
```
### 6\.8\.3 setdiff()
`setdiff` returns rows that are in the first table, but not in the second table.
```
setdiff(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
Order matters for `setdiff`.
```
setdiff(new_subjects, subject)
```
| id | age | gender |
| --- | --- | --- |
| 4 | 19 | f |
| 6 | 19 | m |
| 7 | 16 | m |
| 8 | 20 | f |
| 9 | 19 | f |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has a `setdiff()` function. You usually won’t get an error message, but the output might not be what you expect because the base R `setdiff()` expects columns to be in the same order, so id 5 here registers as different between the two tables.
```
base::setdiff(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
6\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [binding joins](https://psyteachr.github.io/glossary/b#binding.joins) | Joins that bind one table to another by adding their rows or columns together. |
| [filtering joins](https://psyteachr.github.io/glossary/f#filtering.joins) | Joins that act like the dplyr::filter() function in that they remove rows from the data in one table based on the values in another table. |
| [mutating joins](https://psyteachr.github.io/glossary/m#mutating.joins) | Joins that act like the dplyr::mutate() function in that they add new columns to one table based on values in another table. |
| [set operations](https://psyteachr.github.io/glossary/s#set.operations) | Functions that compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff). |
6\.10 Exercises
---------------
Download the [exercises](exercises/06_joins_exercise.Rmd). See the [answers](exercises/06_joins_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(6)
# run this to access the answers
dataskills::exercise(6, answers = TRUE)
```
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/joins.html |
Chapter 6 Data Relations
========================
6\.1 Learning Objectives
------------------------
1. Be able to use the 4 mutating join verbs: [(video)](https://youtu.be/WV0yg6f3DNM)
* [`left_join()`](joins.html#left_join)
* [`right_join()`](joins.html#right_join)
* [`inner_join()`](joins.html#inner_join)
* [`full_join()`](joins.html#full_join)
2. Be able to use the 2 filtering join verbs: [(video)](https://youtu.be/ijoCEKifefQ)
* [`semi_join()`](joins.html#semi_join)
* [`anti_join()`](joins.html#anti_join)
3. Be able to use the 2 binding join verbs: [(video)](https://youtu.be/8RWdNhbVZ4I)
* [`bind_rows()`](joins.html#bind_rows)
* [`bind_cols()`](joins.html#bind_cols)
4. Be able to use the 3 set operations: [(video)](https://youtu.be/c3V33ElWUYI)
* [`intersect()`](joins.html#intersect)
* [`union()`](joins.html#union)
* [`setdiff()`](joins.html#setdiff)
6\.2 Resources
--------------
* [Chapter 13: Relational Data](http://r4ds.had.co.nz/relational-data.html) in *R for Data Science*
* [Cheatsheet for dplyr join functions](http://stat545.com/bit001_dplyr-cheatsheet.html)
* [Lecture slides on dplyr two\-table verbs](slides/05_joins_slides.pdf)
6\.3 Setup
----------
```
# libraries needed
library(tidyverse)
```
6\.4 Data
---------
First, we’ll create two small data tables.
`subject` has id, gender and age for subjects 1\-5\. Age and gender are missing for subject 3\.
```
subject <- tibble(
id = 1:5,
gender = c("m", "m", NA, "nb", "f"),
age = c(19, 22, NA, 19, 18)
)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
`exp` has subject id and the score from an experiment. Some subjects are missing, some completed twice, and some are not in the subject table.
```
exp <- tibble(
id = c(2, 3, 4, 4, 5, 5, 6, 6, 7),
score = c(10, 18, 21, 23, 9, 11, 11, 12, 3)
)
```
| id | score |
| --- | --- |
| 2 | 10 |
| 3 | 18 |
| 4 | 21 |
| 4 | 23 |
| 5 | 9 |
| 5 | 11 |
| 6 | 11 |
| 6 | 12 |
| 7 | 3 |
6\.5 Mutating Joins
-------------------
[Mutating joins](https://psyteachr.github.io/glossary/m#mutating-joins "Joins that act like the dplyr::mutate() function in that they add new columns to one table based on values in another table.") act like the `mutate()` function in that they add new columns to one table based on values in another table.
All the mutating joins have this basic syntax:
`****_join(x, y, by = NULL, suffix = c(".x", ".y")`
* `x` \= the first (left) table
* `y` \= the second (right) table
* `by` \= what columns to match on. If you leave this blank, it will match on all columns with the same names in the two tables.
* `suffix` \= if columns have the same name in the two tables, but you aren’t joining by them, they get a suffix to make them unambiguous. This defaults to “.x” and “.y,” but you can change it to something more meaningful.
You can leave out the `by` argument if you’re matching on all of the columns with the same name, but it’s good practice to always specify it so your code is robust to changes in the loaded data.
### 6\.5\.1 left\_join()
Figure 6\.1: Left Join
A `left_join` keeps all the data from the first (left) table and joins anything that matches from the second (right) table. If the right table has more than one match for a row in the right table, there will be more than one row in the joined table (see ids 4 and 5\).
```
left_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
Figure 6\.2: Left Join (reversed)
The order of tables is swapped here, so the result is all rows from the `exp` table joined to any matching rows from the `subject` table.
```
left_join(exp, subject, by = "id")
```
| id | score | gender | age |
| --- | --- | --- | --- |
| 2 | 10 | m | 22 |
| 3 | 18 | NA | NA |
| 4 | 21 | nb | 19 |
| 4 | 23 | nb | 19 |
| 5 | 9 | f | 18 |
| 5 | 11 | f | 18 |
| 6 | 11 | NA | NA |
| 6 | 12 | NA | NA |
| 7 | 3 | NA | NA |
### 6\.5\.2 right\_join()
Figure 6\.3: Right Join
A `right_join` keeps all the data from the second (right) table and joins anything that matches from the first (left) table.
```
right_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
This table has the same information as `left_join(exp, subject, by = "id")`, but the columns are in a different order (left table, then right table).
### 6\.5\.3 inner\_join()
Figure 6\.4: Inner Join
An `inner_join` returns all the rows that have a match in the other table.
```
inner_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
### 6\.5\.4 full\_join()
Figure 6\.5: Full Join
A `full_join` lets you join up rows in two tables while keeping all of the information from both tables. If a row doesn’t have a match in the other table, the other table’s column values are set to `NA`.
```
full_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
6\.6 Filtering Joins
--------------------
[Filtering joins](https://psyteachr.github.io/glossary/f#filtering-joins "Joins that act like the dplyr::filter() function in that they remove rows from the data in one table based on the values in another table.") act like the `filter()` function in that they remove rows from the data in one table based on the values in another table. The result of a filtering join will only contain rows from the left table and have the same number or fewer rows than the left table.
### 6\.6\.1 semi\_join()
Figure 6\.6: Semi Join
A `semi_join` returns all rows from the left table where there are matching values in the right table, keeping just columns from the left table.
```
semi_join(subject, exp, by = "id")
```
| id | gender | age |
| --- | --- | --- |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
Unlike an inner join, a semi join will never duplicate the rows in the left table if there is more than one matching row in the right table.
Figure 6\.7: Semi Join (Reversed)
Order matters in a semi join.
```
semi_join(exp, subject, by = "id")
```
| id | score |
| --- | --- |
| 2 | 10 |
| 3 | 18 |
| 4 | 21 |
| 4 | 23 |
| 5 | 9 |
| 5 | 11 |
### 6\.6\.2 anti\_join()
Figure 6\.8: Anti Join
An `anti_join` return all rows from the left table where there are *not* matching values in the right table, keeping just columns from the left table.
```
anti_join(subject, exp, by = "id")
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
Figure 6\.9: Anti Join (Reversed)
Order matters in an anti join.
```
anti_join(exp, subject, by = "id")
```
| id | score |
| --- | --- |
| 6 | 11 |
| 6 | 12 |
| 7 | 3 |
6\.7 Binding Joins
------------------
[Binding joins](https://psyteachr.github.io/glossary/b#binding-joins "Joins that bind one table to another by adding their rows or columns together.") bind one table to another by adding their rows or columns together.
### 6\.7\.1 bind\_rows()
You can combine the rows of two tables with `bind_rows`.
Here we’ll add subject data for subjects 6\-9 and bind that to the original subject table.
```
new_subjects <- tibble(
id = 6:9,
gender = c("nb", "m", "f", "f"),
age = c(19, 16, 20, 19)
)
bind_rows(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
| 6 | nb | 19 |
| 7 | m | 16 |
| 8 | f | 20 |
| 9 | f | 19 |
The columns just have to have the same names, they don’t have to be in the same order. Any columns that differ between the two tables will just have `NA` values for entries from the other table.
If a row is duplicated between the two tables (like id 5 below), the row will also be duplicated in the resulting table. If your tables have the exact same columns, you can use `union()` (see below) to avoid duplicates.
```
new_subjects <- tibble(
id = 5:9,
age = c(18, 19, 16, 20, 19),
gender = c("f", "nb", "m", "f", "f"),
new = c(1,2,3,4,5)
)
bind_rows(subject, new_subjects)
```
| id | gender | age | new |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | NA |
| 3 | NA | NA | NA |
| 4 | nb | 19 | NA |
| 5 | f | 18 | NA |
| 5 | f | 18 | 1 |
| 6 | nb | 19 | 2 |
| 7 | m | 16 | 3 |
| 8 | f | 20 | 4 |
| 9 | f | 19 | 5 |
### 6\.7\.2 bind\_cols()
You can merge two tables with the same number of rows using `bind_cols`. This is only useful if the two tables have their rows in the exact same order. The only advantage over a left join is when the tables don’t have any IDs to join by and you have to rely solely on their order.
```
new_info <- tibble(
colour = c("red", "orange", "yellow", "green", "blue")
)
bind_cols(subject, new_info)
```
| id | gender | age | colour |
| --- | --- | --- | --- |
| 1 | m | 19 | red |
| 2 | m | 22 | orange |
| 3 | NA | NA | yellow |
| 4 | nb | 19 | green |
| 5 | f | 18 | blue |
6\.8 Set Operations
-------------------
[Set operations](https://psyteachr.github.io/glossary/s#set-operations "Functions that compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff).") compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff).
### 6\.8\.1 intersect()
`intersect()` returns all rows in two tables that match exactly. The columns don’t have to be in the same order.
```
new_subjects <- tibble(
id = seq(4, 9),
age = c(19, 18, 19, 16, 20, 19),
gender = c("f", "f", "m", "m", "f", "f")
)
intersect(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 5 | f | 18 |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has an `intersect()` function. The error message can be confusing and looks something like this:
```
base::intersect(subject, new_subjects)
```
```
## Error: Must subset rows with a valid subscript vector.
## ℹ Logical subscripts must match the size of the indexed input.
## x Input has size 6 but subscript `!duplicated(x, fromLast = fromLast, ...)` has size 0.
```
### 6\.8\.2 union()
`union()` returns all the rows from both tables, removing duplicate rows.
```
union(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
| 4 | f | 19 |
| 6 | m | 19 |
| 7 | m | 16 |
| 8 | f | 20 |
| 9 | f | 19 |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has a `union()` function. You usually won’t get an error message, but the output won’t be what you expect.
```
base::union(subject, new_subjects)
```
```
## [[1]]
## [1] 1 2 3 4 5
##
## [[2]]
## [1] "m" "m" NA "nb" "f"
##
## [[3]]
## [1] 19 22 NA 19 18
##
## [[4]]
## [1] 4 5 6 7 8 9
##
## [[5]]
## [1] 19 18 19 16 20 19
##
## [[6]]
## [1] "f" "f" "m" "m" "f" "f"
```
### 6\.8\.3 setdiff()
`setdiff` returns rows that are in the first table, but not in the second table.
```
setdiff(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
Order matters for `setdiff`.
```
setdiff(new_subjects, subject)
```
| id | age | gender |
| --- | --- | --- |
| 4 | 19 | f |
| 6 | 19 | m |
| 7 | 16 | m |
| 8 | 20 | f |
| 9 | 19 | f |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has a `setdiff()` function. You usually won’t get an error message, but the output might not be what you expect because the base R `setdiff()` expects columns to be in the same order, so id 5 here registers as different between the two tables.
```
base::setdiff(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
6\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [binding joins](https://psyteachr.github.io/glossary/b#binding.joins) | Joins that bind one table to another by adding their rows or columns together. |
| [filtering joins](https://psyteachr.github.io/glossary/f#filtering.joins) | Joins that act like the dplyr::filter() function in that they remove rows from the data in one table based on the values in another table. |
| [mutating joins](https://psyteachr.github.io/glossary/m#mutating.joins) | Joins that act like the dplyr::mutate() function in that they add new columns to one table based on values in another table. |
| [set operations](https://psyteachr.github.io/glossary/s#set.operations) | Functions that compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff). |
6\.10 Exercises
---------------
Download the [exercises](exercises/06_joins_exercise.Rmd). See the [answers](exercises/06_joins_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(6)
# run this to access the answers
dataskills::exercise(6, answers = TRUE)
```
6\.1 Learning Objectives
------------------------
1. Be able to use the 4 mutating join verbs: [(video)](https://youtu.be/WV0yg6f3DNM)
* [`left_join()`](joins.html#left_join)
* [`right_join()`](joins.html#right_join)
* [`inner_join()`](joins.html#inner_join)
* [`full_join()`](joins.html#full_join)
2. Be able to use the 2 filtering join verbs: [(video)](https://youtu.be/ijoCEKifefQ)
* [`semi_join()`](joins.html#semi_join)
* [`anti_join()`](joins.html#anti_join)
3. Be able to use the 2 binding join verbs: [(video)](https://youtu.be/8RWdNhbVZ4I)
* [`bind_rows()`](joins.html#bind_rows)
* [`bind_cols()`](joins.html#bind_cols)
4. Be able to use the 3 set operations: [(video)](https://youtu.be/c3V33ElWUYI)
* [`intersect()`](joins.html#intersect)
* [`union()`](joins.html#union)
* [`setdiff()`](joins.html#setdiff)
6\.2 Resources
--------------
* [Chapter 13: Relational Data](http://r4ds.had.co.nz/relational-data.html) in *R for Data Science*
* [Cheatsheet for dplyr join functions](http://stat545.com/bit001_dplyr-cheatsheet.html)
* [Lecture slides on dplyr two\-table verbs](slides/05_joins_slides.pdf)
6\.3 Setup
----------
```
# libraries needed
library(tidyverse)
```
6\.4 Data
---------
First, we’ll create two small data tables.
`subject` has id, gender and age for subjects 1\-5\. Age and gender are missing for subject 3\.
```
subject <- tibble(
id = 1:5,
gender = c("m", "m", NA, "nb", "f"),
age = c(19, 22, NA, 19, 18)
)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
`exp` has subject id and the score from an experiment. Some subjects are missing, some completed twice, and some are not in the subject table.
```
exp <- tibble(
id = c(2, 3, 4, 4, 5, 5, 6, 6, 7),
score = c(10, 18, 21, 23, 9, 11, 11, 12, 3)
)
```
| id | score |
| --- | --- |
| 2 | 10 |
| 3 | 18 |
| 4 | 21 |
| 4 | 23 |
| 5 | 9 |
| 5 | 11 |
| 6 | 11 |
| 6 | 12 |
| 7 | 3 |
6\.5 Mutating Joins
-------------------
[Mutating joins](https://psyteachr.github.io/glossary/m#mutating-joins "Joins that act like the dplyr::mutate() function in that they add new columns to one table based on values in another table.") act like the `mutate()` function in that they add new columns to one table based on values in another table.
All the mutating joins have this basic syntax:
`****_join(x, y, by = NULL, suffix = c(".x", ".y")`
* `x` \= the first (left) table
* `y` \= the second (right) table
* `by` \= what columns to match on. If you leave this blank, it will match on all columns with the same names in the two tables.
* `suffix` \= if columns have the same name in the two tables, but you aren’t joining by them, they get a suffix to make them unambiguous. This defaults to “.x” and “.y,” but you can change it to something more meaningful.
You can leave out the `by` argument if you’re matching on all of the columns with the same name, but it’s good practice to always specify it so your code is robust to changes in the loaded data.
### 6\.5\.1 left\_join()
Figure 6\.1: Left Join
A `left_join` keeps all the data from the first (left) table and joins anything that matches from the second (right) table. If the right table has more than one match for a row in the right table, there will be more than one row in the joined table (see ids 4 and 5\).
```
left_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
Figure 6\.2: Left Join (reversed)
The order of tables is swapped here, so the result is all rows from the `exp` table joined to any matching rows from the `subject` table.
```
left_join(exp, subject, by = "id")
```
| id | score | gender | age |
| --- | --- | --- | --- |
| 2 | 10 | m | 22 |
| 3 | 18 | NA | NA |
| 4 | 21 | nb | 19 |
| 4 | 23 | nb | 19 |
| 5 | 9 | f | 18 |
| 5 | 11 | f | 18 |
| 6 | 11 | NA | NA |
| 6 | 12 | NA | NA |
| 7 | 3 | NA | NA |
### 6\.5\.2 right\_join()
Figure 6\.3: Right Join
A `right_join` keeps all the data from the second (right) table and joins anything that matches from the first (left) table.
```
right_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
This table has the same information as `left_join(exp, subject, by = "id")`, but the columns are in a different order (left table, then right table).
### 6\.5\.3 inner\_join()
Figure 6\.4: Inner Join
An `inner_join` returns all the rows that have a match in the other table.
```
inner_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
### 6\.5\.4 full\_join()
Figure 6\.5: Full Join
A `full_join` lets you join up rows in two tables while keeping all of the information from both tables. If a row doesn’t have a match in the other table, the other table’s column values are set to `NA`.
```
full_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
### 6\.5\.1 left\_join()
Figure 6\.1: Left Join
A `left_join` keeps all the data from the first (left) table and joins anything that matches from the second (right) table. If the right table has more than one match for a row in the right table, there will be more than one row in the joined table (see ids 4 and 5\).
```
left_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
Figure 6\.2: Left Join (reversed)
The order of tables is swapped here, so the result is all rows from the `exp` table joined to any matching rows from the `subject` table.
```
left_join(exp, subject, by = "id")
```
| id | score | gender | age |
| --- | --- | --- | --- |
| 2 | 10 | m | 22 |
| 3 | 18 | NA | NA |
| 4 | 21 | nb | 19 |
| 4 | 23 | nb | 19 |
| 5 | 9 | f | 18 |
| 5 | 11 | f | 18 |
| 6 | 11 | NA | NA |
| 6 | 12 | NA | NA |
| 7 | 3 | NA | NA |
### 6\.5\.2 right\_join()
Figure 6\.3: Right Join
A `right_join` keeps all the data from the second (right) table and joins anything that matches from the first (left) table.
```
right_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
This table has the same information as `left_join(exp, subject, by = "id")`, but the columns are in a different order (left table, then right table).
### 6\.5\.3 inner\_join()
Figure 6\.4: Inner Join
An `inner_join` returns all the rows that have a match in the other table.
```
inner_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
### 6\.5\.4 full\_join()
Figure 6\.5: Full Join
A `full_join` lets you join up rows in two tables while keeping all of the information from both tables. If a row doesn’t have a match in the other table, the other table’s column values are set to `NA`.
```
full_join(subject, exp, by = "id")
```
| id | gender | age | score |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | 10 |
| 3 | NA | NA | 18 |
| 4 | nb | 19 | 21 |
| 4 | nb | 19 | 23 |
| 5 | f | 18 | 9 |
| 5 | f | 18 | 11 |
| 6 | NA | NA | 11 |
| 6 | NA | NA | 12 |
| 7 | NA | NA | 3 |
6\.6 Filtering Joins
--------------------
[Filtering joins](https://psyteachr.github.io/glossary/f#filtering-joins "Joins that act like the dplyr::filter() function in that they remove rows from the data in one table based on the values in another table.") act like the `filter()` function in that they remove rows from the data in one table based on the values in another table. The result of a filtering join will only contain rows from the left table and have the same number or fewer rows than the left table.
### 6\.6\.1 semi\_join()
Figure 6\.6: Semi Join
A `semi_join` returns all rows from the left table where there are matching values in the right table, keeping just columns from the left table.
```
semi_join(subject, exp, by = "id")
```
| id | gender | age |
| --- | --- | --- |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
Unlike an inner join, a semi join will never duplicate the rows in the left table if there is more than one matching row in the right table.
Figure 6\.7: Semi Join (Reversed)
Order matters in a semi join.
```
semi_join(exp, subject, by = "id")
```
| id | score |
| --- | --- |
| 2 | 10 |
| 3 | 18 |
| 4 | 21 |
| 4 | 23 |
| 5 | 9 |
| 5 | 11 |
### 6\.6\.2 anti\_join()
Figure 6\.8: Anti Join
An `anti_join` return all rows from the left table where there are *not* matching values in the right table, keeping just columns from the left table.
```
anti_join(subject, exp, by = "id")
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
Figure 6\.9: Anti Join (Reversed)
Order matters in an anti join.
```
anti_join(exp, subject, by = "id")
```
| id | score |
| --- | --- |
| 6 | 11 |
| 6 | 12 |
| 7 | 3 |
6\.7 Binding Joins
------------------
[Binding joins](https://psyteachr.github.io/glossary/b#binding-joins "Joins that bind one table to another by adding their rows or columns together.") bind one table to another by adding their rows or columns together.
### 6\.7\.1 bind\_rows()
You can combine the rows of two tables with `bind_rows`.
Here we’ll add subject data for subjects 6\-9 and bind that to the original subject table.
```
new_subjects <- tibble(
  id = 6:9,
  gender = c("nb", "m", "f", "f"),
  age = c(19, 16, 20, 19)
)
bind_rows(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
| 6 | nb | 19 |
| 7 | m | 16 |
| 8 | f | 20 |
| 9 | f | 19 |
The columns just have to have the same names; they don’t have to be in the same order. Any columns that differ between the two tables will just have `NA` values for entries from the other table.
If a row is duplicated between the two tables (like id 5 below), the row will also be duplicated in the resulting table. If your tables have the exact same columns, you can use `union()` (see below) to avoid duplicates.
```
new_subjects <- tibble(
  id = 5:9,
  age = c(18, 19, 16, 20, 19),
  gender = c("f", "nb", "m", "f", "f"),
  new = c(1, 2, 3, 4, 5)
)
bind_rows(subject, new_subjects)
```
| id | gender | age | new |
| --- | --- | --- | --- |
| 1 | m | 19 | NA |
| 2 | m | 22 | NA |
| 3 | NA | NA | NA |
| 4 | nb | 19 | NA |
| 5 | f | 18 | NA |
| 5 | f | 18 | 1 |
| 6 | nb | 19 | 2 |
| 7 | m | 16 | 3 |
| 8 | f | 20 | 4 |
| 9 | f | 19 | 5 |
### 6\.7\.2 bind\_cols()
You can merge two tables with the same number of rows using `bind_cols`. This is only useful if the two tables have their rows in the exact same order. The only advantage over a left join is when the tables don’t have any IDs to join by and you have to rely solely on their order.
```
new_info <- tibble(
  colour = c("red", "orange", "yellow", "green", "blue")
)
bind_cols(subject, new_info)
```
| id | gender | age | colour |
| --- | --- | --- | --- |
| 1 | m | 19 | red |
| 2 | m | 22 | orange |
| 3 | NA | NA | yellow |
| 4 | nb | 19 | green |
| 5 | f | 18 | blue |
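If the extra information does come with an id, a join is usually safer than `bind_cols()` because it doesn’t depend on row order. A minimal sketch (the `new_info_with_id` table here is made up for illustration):
```
# hypothetical table with an id column (not from the chapter)
new_info_with_id <- tibble(
  id = 1:5,
  colour = c("red", "orange", "yellow", "green", "blue")
)
# matching on id is robust to the rows arriving in a different order
left_join(subject, new_info_with_id, by = "id")
```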
6\.8 Set Operations
-------------------
[Set operations](https://psyteachr.github.io/glossary/s#set-operations "Functions that compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff).") compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff).
### 6\.8\.1 intersect()
`intersect()` returns all rows in two tables that match exactly. The columns don’t have to be in the same order.
```
new_subjects <- tibble(
  id = seq(4, 9),
  age = c(19, 18, 19, 16, 20, 19),
  gender = c("f", "f", "m", "m", "f", "f")
)
intersect(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 5 | f | 18 |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has an `intersect()` function. The error message can be confusing and looks something like this:
```
base::intersect(subject, new_subjects)
```
```
## Error: Must subset rows with a valid subscript vector.
## ℹ Logical subscripts must match the size of the indexed input.
## x Input has size 6 but subscript `!duplicated(x, fromLast = fromLast, ...)` has size 0.
```
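If you run into this, one quick fix (besides loading the tidyverse) is to call the dplyr version explicitly; the same trick works for `union()` and `setdiff()`. A minimal sketch:
```
# explicitly use dplyr's data-frame-aware version
dplyr::intersect(subject, new_subjects)
```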
### 6\.8\.2 union()
`union()` returns all the rows from both tables, removing duplicate rows.
```
union(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
| 4 | f | 19 |
| 6 | m | 19 |
| 7 | m | 16 |
| 8 | f | 20 |
| 9 | f | 19 |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has a `union()` function. You usually won’t get an error message, but the output won’t be what you expect.
```
base::union(subject, new_subjects)
```
```
## [[1]]
## [1] 1 2 3 4 5
##
## [[2]]
## [1] "m" "m" NA "nb" "f"
##
## [[3]]
## [1] 19 22 NA 19 18
##
## [[4]]
## [1] 4 5 6 7 8 9
##
## [[5]]
## [1] 19 18 19 16 20 19
##
## [[6]]
## [1] "f" "f" "m" "m" "f" "f"
```
### 6\.8\.3 setdiff()
`setdiff` returns rows that are in the first table, but not in the second table.
```
setdiff(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
Order matters for `setdiff`.
```
setdiff(new_subjects, subject)
```
| id | age | gender |
| --- | --- | --- |
| 4 | 19 | f |
| 6 | 19 | m |
| 7 | 16 | m |
| 8 | 20 | f |
| 9 | 19 | f |
If you’ve forgotten to load dplyr or the tidyverse, [base R](https://psyteachr.github.io/glossary/b#base-r "The set of R functions that come with a basic installation of R, before you add external packages") also has a `setdiff()` function. You usually won’t get an error message, but the output might not be what you expect because the base R `setdiff()` expects columns to be in the same order, so id 5 here registers as different between the two tables.
```
base::setdiff(subject, new_subjects)
```
| id | gender | age |
| --- | --- | --- |
| 1 | m | 19 |
| 2 | m | 22 |
| 3 | NA | NA |
| 4 | nb | 19 |
| 5 | f | 18 |
6\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [base r](https://psyteachr.github.io/glossary/b#base.r) | The set of R functions that come with a basic installation of R, before you add external packages |
| [binding joins](https://psyteachr.github.io/glossary/b#binding.joins) | Joins that bind one table to another by adding their rows or columns together. |
| [filtering joins](https://psyteachr.github.io/glossary/f#filtering.joins) | Joins that act like the dplyr::filter() function in that they remove rows from the data in one table based on the values in another table. |
| [mutating joins](https://psyteachr.github.io/glossary/m#mutating.joins) | Joins that act like the dplyr::mutate() function in that they add new columns to one table based on values in another table. |
| [set operations](https://psyteachr.github.io/glossary/s#set.operations) | Functions that compare two tables and return rows that match (intersect), are in either table (union), or are in one table but not the other (setdiff). |
6\.10 Exercises
---------------
Download the [exercises](exercises/06_joins_exercise.Rmd). See the [answers](exercises/06_joins_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(6)
# run this to access the answers
dataskills::exercise(6, answers = TRUE)
```
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/func.html |
Chapter 7 Iteration \& Functions
================================
7\.1 Learning Objectives
------------------------
You will learn about functions and iteration by using simulation to calculate a power analysis for an independent samples t\-test.
### 7\.1\.1 Basic
1. Work with basic [iteration functions](func.html#iteration-functions) `rep`, `seq`, `replicate` [(video)](https://youtu.be/X3zFA71JzgE)
2. Use [`map()` and `apply()` functions](func.html#map-apply) [(video)](https://youtu.be/HcZxQZwJ8T4)
3. Write your own [custom functions](func.html#custom-functions) with `function()` [(video)](https://youtu.be/Qqjva0xgYC4)
4. Set [default values](func.html#defaults) for the arguments in your functions
### 7\.1\.2 Intermediate
5. Understand [scope](func.html#scope)
6. Use [error handling and warnings](func.html#warnings-errors) in a function
### 7\.1\.3 Advanced
The topics below are not (yet) covered in these materials, but they are directions for independent learning.
7. Repeat commands having multiple arguments using `purrr::map2_*()` and `purrr::pmap_*()`
8. Create **nested data frames** using `dplyr::group_by()` and `tidyr::nest()`
9. Work with **nested data frames** in `dplyr`
10. Capture and deal with errors using ‘adverb’ functions `purrr::safely()` and `purrr::possibly()`
7\.2 Resources
--------------
* Chapters 19 and 21 of [R for Data Science](http://r4ds.had.co.nz)
* [RStudio Apply Functions Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/purrr.pdf)
In the next two lectures, we are going to learn more about [iteration](https://psyteachr.github.io/glossary/i#iteration "Repeating a process or function") (doing the same commands over and over) and custom [functions](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") through a data simulation exercise, which will also lead us into more traditional statistical topics.
7\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse) ## contains purrr, tidyr, dplyr
library(broom) ## converts test output to tidy tables
set.seed(8675309) # makes sure random numbers are reproducible
```
7\.4 Iteration functions
------------------------
We first learned about the two basic iteration functions, `rep()` and `seq()` in the [Working with Data](data.html#rep_seq) chapter.
### 7\.4\.1 rep()
The function `rep()` lets you repeat the first argument a number of times.
Use `rep()` to create a vector of alternating `"A"` and `"B"` values of length 24\.
```
rep(c("A", "B"), 12)
```
```
## [1] "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A"
## [20] "B" "A" "B" "A" "B"
```
If you don’t specify what the second argument is, it defaults to `times`, repeating the vector in the first argument that many times. Make the same vector as above, setting the second argument explicitly.
```
rep(c("A", "B"), times = 12)
```
```
## [1] "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A"
## [20] "B" "A" "B" "A" "B"
```
If the second argument is a vector that is the same length as the first argument, each element in the first vector is repeated that many times. Use `rep()` to create a vector of 11 `"A"` values followed by 3 `"B"` values.
```
rep(c("A", "B"), c(11, 3))
```
```
## [1] "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "B" "B" "B"
```
You can repeat each element of the vector a specified number of times using the `each` argument. Use `rep()` to create a vector of 12 `"A"` values followed by 12 `"B"` values.
```
rep(c("A", "B"), each = 12)
```
```
## [1] "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "B" "B" "B" "B" "B" "B" "B"
## [20] "B" "B" "B" "B" "B"
```
What do you think will happen if you set both `times` to 3 and `each` to 2?
```
rep(c("A", "B"), times = 3, each = 2)
```
```
## [1] "A" "A" "B" "B" "A" "A" "B" "B" "A" "A" "B" "B"
```
### 7\.4\.2 seq()
The function `seq()` is useful for generating a sequence of numbers with some pattern.
Use `seq()` to create a vector of the integers 0 to 10\.
```
seq(0, 10)
```
```
## [1] 0 1 2 3 4 5 6 7 8 9 10
```
You can set the `by` argument to count by numbers other than 1 (the default). Use `seq()` to create a vector of the numbers 0 to 100 by 10s.
```
seq(0, 100, by = 10)
```
```
## [1] 0 10 20 30 40 50 60 70 80 90 100
```
The argument `length.out` is useful if you know how many steps you want to divide something into. Use `seq()` to create a vector that starts with 0, ends with 100, and has 12 equally spaced steps (hint: how many numbers would be in a vector with 2 *steps*?).
```
seq(0, 100, length.out = 13)
```
```
## [1] 0.000000 8.333333 16.666667 25.000000 33.333333 41.666667
## [7] 50.000000 58.333333 66.666667 75.000000 83.333333 91.666667
## [13] 100.000000
```
### 7\.4\.3 replicate()
You can use the `replicate()` function to run a function `n` times.
For example, you can get 3 sets of 5 numbers from a random normal distribution by setting `n` to `3` and `expr` to `rnorm(5)`.
```
replicate(n = 3, expr = rnorm(5))
```
```
## [,1] [,2] [,3]
## [1,] -0.9965824 0.98721974 -1.5495524
## [2,] 0.7218241 0.02745393 1.0226378
## [3,] -0.6172088 0.67287232 0.1500832
## [4,] 2.0293916 0.57206650 -0.6599640
## [5,] 1.0654161 0.90367770 -0.9945890
```
By default, `replicate()` simplifies your result into a [matrix](https://psyteachr.github.io/glossary/m#matrix "A container data type consisting of numbers arranged into a fixed number of rows and columns") that is easy to convert into a table if your function returns vectors that are the same length. If you’d rather have a list of vectors, set `simplify = FALSE`.
```
replicate(n = 3, expr = rnorm(5), simplify = FALSE)
```
```
## [[1]]
## [1] 1.9724587 -0.4418016 -0.9006372 -0.1505882 -0.8278942
##
## [[2]]
## [1] 1.98582582 0.04400503 -0.40428231 -0.47299855 -0.41482324
##
## [[3]]
## [1] 0.6832342 0.6902011 0.5334919 -0.1861048 0.3829458
```
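As a small taste of why `replicate()` is useful for simulation, you could estimate the standard error of a mean; a minimal sketch:
```
# 1000 means of samples of size 5 from a standard normal distribution;
# their SD should be close to the theoretical standard error 1/sqrt(5), about 0.45
means <- replicate(n = 1000, expr = mean(rnorm(5)))
sd(means)
```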
### 7\.4\.4 map() and apply() functions
`purrr::map()` and `lapply()` return a list of the same length as a vector or list, each element of which is the result of applying a function to the corresponding element. They function much the same, but purrr functions have some optimisations for working with the tidyverse. We’ll be working mostly with purrr functions in this course, but apply functions are very common in code that you might see in examples on the web.
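A minimal sketch of that equivalence, before we get to a more realistic example:
```
# both return a list with one element per input value
lapply(c(1, 4, 9), sqrt)
purrr::map(c(1, 4, 9), sqrt)
```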
Imagine you want to calculate the power for a two\-sample t\-test with a mean difference of 0\.2 and SD of 1, for all the sample sizes 100 to 1000 (by 100s). You could run the `power.t.test()` function 10 times, extract the value for “power” from each result, and put the values in a table.
```
p100 <- power.t.test(n = 100, delta = 0.2, sd = 1, type = "two.sample")
# 8 more lines
p1000 <- power.t.test(n = 1000, delta = 0.2, sd = 1, type = "two.sample")
tibble(
  n = c(100, "...", 1000),
  power = c(p100$power, "...", p1000$power)
)
```
| n | power |
| --- | --- |
| 100 | 0\.290266404572217 |
| … | … |
| 1000 | 0\.9939638 |
However, the `apply()` and `map()` functions allow you to perform a function on each item in a vector or list. First make an object `n` that is the vector of the sample sizes you want to test, then use `lapply()` or `map()` to run the function `power.t.test()` on each item. You can set other arguments to `power.t.test()` after the function argument.
```
n <- seq(100, 1000, 100)
pcalc <- lapply(n, power.t.test,
                delta = 0.2, sd = 1, type = "two.sample")
# or
pcalc <- purrr::map(n, power.t.test,
                    delta = 0.2, sd = 1, type = "two.sample")
```
These functions return a list where each item is the result of `power.t.test()`, which returns a list of results that includes the named item “power.” This is a special list that has a summary format if you just print it directly:
```
pcalc[[1]]
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
But you can see the individual items using the `str()` function.
```
pcalc[[1]] %>% str()
```
```
## List of 8
## $ n : num 100
## $ delta : num 0.2
## $ sd : num 1
## $ sig.level : num 0.05
## $ power : num 0.29
## $ alternative: chr "two.sided"
## $ note : chr "n is number in *each* group"
## $ method : chr "Two-sample t test power calculation"
## - attr(*, "class")= chr "power.htest"
```
`sapply()` is a version of `lapply()` that returns a vector or array instead of a list, where appropriate. The corresponding purrr functions are `map_dbl()`, `map_chr()`, `map_int()` and `map_lgl()`, which return vectors with the corresponding [data type](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object.").
You can extract a value from a list with the function `[[`. You usually see this written as `pcalc[[1]]`, but if you put it inside backticks, you can use it in apply and map functions.
```
sapply(pcalc, `[[`, "power")
```
```
## [1] 0.2902664 0.5140434 0.6863712 0.8064964 0.8847884 0.9333687 0.9623901
## [8] 0.9792066 0.9887083 0.9939638
```
We use `map_dbl()` here because the value for “power” is a [double](https://psyteachr.github.io/glossary/d#double "A data type representing a real decimal number").
```
purrr::map_dbl(pcalc, `[[`, "power")
```
```
## [1] 0.2902664 0.5140434 0.6863712 0.8064964 0.8847884 0.9333687 0.9623901
## [8] 0.9792066 0.9887083 0.9939638
```
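The other typed variants work the same way. For example (a small added illustration, not from the original text), `map_chr()` can pull out a character element such as `method`, which the `str()` output above shows is a string:
```
purrr::map_chr(pcalc, `[[`, "method")
# each element is "Two-sample t test power calculation"
```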
We can use the `map()` functions inside a `mutate()` function to run the `power.t.test()` function on the value of `n` from each row of a table, then extract the value for “power,” and delete the column with the power calculations.
```
mypower <- tibble(
n = seq(100, 1000, 100)) %>%
mutate(pcalc = purrr::map(n, power.t.test,
delta = 0.2,
sd = 1,
type="two.sample"),
power = purrr::map_dbl(pcalc, `[[`, "power")) %>%
select(-pcalc)
```
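Figure 7\.1 below shows this power curve. The book’s plotting code isn’t reproduced here, but a minimal ggplot2 sketch along these lines (an added example, not necessarily the original code) would draw a similar figure from `mypower`:
```
ggplot(mypower, aes(x = n, y = power)) +
  geom_point() +
  geom_line() +
  labs(x = "Sample size per group", y = "Power")
```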
Figure 7\.1: Power for a two\-sample t\-test with d \= 0\.2
7\.5 Custom functions
---------------------
In addition to the built\-in functions and functions you can access from packages, you can also write your own functions (and eventually even packages!).
### 7\.5\.1 Structuring a function
The general structure of a function is as follows:
```
function_name <- function(my_args) {
# process the arguments
# return some value
}
```
Here is a very simple function. Can you guess what it does?
```
add1 <- function(my_number) {
my_number + 1
}
add1(10)
```
```
## [1] 11
```
Let’s make a function that reports p\-values in APA format (with “p \= \[rounded value]” when p \>\= .001 and “p \< .001” when p \< .001\).
First, we have to name the function. You can name it anything, but try not to duplicate existing functions or you will mask them. For example, if you call your function `rep`, then you will need to use `base::rep()` to access the normal `rep` function. Let’s call our p\-value function `report_p` and set up the framework of the function.
```
report_p <- function() {
}
```
### 7\.5\.2 Arguments
We need to add one [argument](https://psyteachr.github.io/glossary/a#argument "A variable that provides input to a function."), the p\-value you want to report. The names you choose for the arguments are private to that function, so it is not a problem if they conflict with other variables in your script. You put the arguments in the parentheses of `function()` in the order you want them to be matched by position (just like the built\-in functions you’ve used before).
```
report_p <- function(p) {
}
```
### 7\.5\.3 Argument defaults
You can add a default value to any argument. If that argument is skipped, then the function uses the default value. It probably doesn’t make sense to run this function without specifying the p\-value, but we can add a second argument called `digits` that defaults to 3, so we can round p\-values to any number of digits.
```
report_p <- function(p, digits = 3) {
}
```
Now we need to write some code inside the function to process the input arguments and turn them into a **return**ed output. Put the output as the last item in the function.
```
report_p <- function(p, digits = 3) {
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
reported = paste("p =", roundp)
}
reported
}
```
You might also see the returned output inside of the `return()` function. This does the same thing.
```
report_p <- function(p, digits = 3) {
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
reported = paste("p =", roundp)
}
return(reported)
}
```
When you run the code defining your function, it doesn’t output anything, but makes a new object in the Environment tab under **`Functions`**. Now you can run the function.
```
report_p(0.04869)
report_p(0.0000023)
```
```
## [1] "p = 0.049"
## [1] "p < .001"
```
### 7\.5\.4 Scope
What happens in a function stays in a function. You can change the value of a variable passed to a function, but that won’t change the value of the variable outside of the function, even if that variable has the same name as the one in the function.
```
reported <- "not changed"
# inside this function, reported == "p = 0.002"
report_p(0.0023)
reported # still "not changed"
```
```
## [1] "p = 0.002"
## [1] "not changed"
```
### 7\.5\.5 Warnings and errors
What happens when you omit the argument for `p`? Or if you set `p` to 1\.5 or “a”?
You might want to add a more specific warning and stop running the function code if someone enters a value that isn’t a number. You can do this with the `stop()` function.
If someone enters a number that isn’t possible for a p\-value (0\-1\), you might want to warn them that this is probably not what they intended, but still continue with the function. You can do this with `warning()`.
```
report_p <- function(p, digits = 3) {
if (!is.numeric(p)) stop("p must be a number")
if (p <= 0) warning("p-values are normally greater than 0")
if (p >= 1) warning("p-values are normally less than 1")
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
reported = paste("p =", roundp)
}
reported
}
```
```
report_p()
```
```
## Error in report_p(): argument "p" is missing, with no default
```
```
report_p("a")
```
```
## Error in report_p("a"): p must be a number
```
```
report_p(-2)
```
```
## Warning in report_p(-2): p-values are normally greater than 0
```
```
report_p(2)
```
```
## Warning in report_p(2): p-values are normally less than 1
```
```
## [1] "p < .001"
## [1] "p = 2"
```
7\.6 Iterating your own functions
---------------------------------
First, let’s build up the code that we want to iterate.
### 7\.6\.1 rnorm()
Create a vector of 20 random numbers drawn from a normal distribution with a mean of 5 and standard deviation of 1 using the `rnorm()` function and store them in the variable `A`.
```
A <- rnorm(20, mean = 5, sd = 1)
```
### 7\.6\.2 tibble::tibble()
A `tibble` is a type of table or `data.frame`. The function `tibble::tibble()` creates a tibble with a column for each argument. Each argument takes the form `column_name = data_vector`.
Create a table called `dat` including two vectors: `A` that is a vector of 20 random normally distributed numbers with a mean of 5 and SD of 1, and `B` that is a vector of 20 random normally distributed numbers with a mean of 5\.5 and SD of 1\.
```
dat <- tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
)
```
### 7\.6\.3 t.test()
You can run a Welch two\-sample t\-test by including the two samples you made as the first two arguments to the function `t.test`. You can reference one column of a table by its name using the format `table_name$column_name`.
```
t.test(dat$A, dat$B)
```
```
##
## Welch Two Sample t-test
##
## data: dat$A and dat$B
## t = -1.7716, df = 36.244, p-value = 0.08487
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -1.2445818 0.0838683
## sample estimates:
## mean of x mean of y
## 4.886096 5.466453
```
You can also convert the table to long format using the `gather` function and specify the t\-test using the format `dv_column~grouping_column`.
```
longdat <- gather(dat, group, score, A:B)
t.test(score~group, data = longdat)
```
```
##
## Welch Two Sample t-test
##
## data: score by group
## t = -1.7716, df = 36.244, p-value = 0.08487
## alternative hypothesis: true difference in means between group A and group B is not equal to 0
## 95 percent confidence interval:
## -1.2445818 0.0838683
## sample estimates:
## mean in group A mean in group B
## 4.886096 5.466453
```
### 7\.6\.4 broom::tidy()
You can use the function `broom::tidy()` to extract the data from a statistical test in a table format. The example below pipes everything together.
```
tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy()
```
| estimate | estimate1 | estimate2 | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \-0\.6422108 | 5\.044009 | 5\.68622 | \-2\.310591 | 0\.0264905 | 37\.27083 | \-1\.205237 | \-0\.0791844 | Welch Two Sample t\-test | two.sided |
In the pipeline above, `t.test(score~group, data = .)` uses the `.` notation to change the location of the piped\-in data table from its default position as the first argument to a different position.
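As a small added illustration (not from the original text), the same `.` placeholder works in simpler calls too:
```
# without ".", the piped value becomes the first argument:
letters[1:3] %>% paste("x") # paste(letters[1:3], "x") -> "a x" "b x" "c x"
# with ".", you choose where the piped value goes:
letters[1:3] %>% paste("x", .) # paste("x", letters[1:3]) -> "x a" "x b" "x c"
```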
Finally, we can extract a single value from this results table using `pull()`.
```
tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy() %>%
pull(p.value)
```
```
## [1] 0.7075268
```
### 7\.6\.5 Custom function: t\_sim()
First, name your function `t_sim` and wrap the code above in a function with no arguments.
```
t_sim <- function() {
tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy() %>%
pull(p.value)
}
```
Run it a few times to see what happens.
```
t_sim()
```
```
## [1] 0.00997552
```
### 7\.6\.6 Iterate t\_sim()
Let’s run the `t_sim` function 1000 times, assign the resulting p\-values to a vector called `reps`, and check what proportion of p\-values are lower than alpha (e.g., .05\). This number is the power for this analysis.
```
reps <- replicate(1000, t_sim())
alpha <- .05
power <- mean(reps < alpha)
power
```
```
## [1] 0.328
```
### 7\.6\.7 Set seed
You can use the `set.seed` function before you run a function that uses random numbers to make sure that you get the same random data back each time. You can use any integer you like as the seed.
```
set.seed(90201)
```
Make sure you don’t ever use `set.seed()` **inside** of a simulation function, or you will just simulate the exact same data over and over again.
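To see why (a small added demonstration), compare what happens with a function that re\-seeds itself on every call:
```
bad_sim <- function() {
  set.seed(1) # re-seeding inside the function...
  rnorm(1)
}
replicate(3, bad_sim()) # ...returns the identical "random" value three times
```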
Figure 7\.2: @KellyBodwin
### 7\.6\.8 Add arguments
You can just edit your function each time you want to calculate power for a different sample size, but it is more efficient to build this into your function as arguments. Redefine `t_sim`, setting arguments for the mean and SD of group A, the mean and SD of group B, and the number of subjects per group. Give them all default values.
```
t_sim <- function(n = 10, m1=0, sd1=1, m2=0, sd2=1) {
tibble(
A = rnorm(n, m1, sd1),
B = rnorm(n, m2, sd2)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy() %>%
pull(p.value)
}
```
Test your function with some different values to see if the results make sense.
```
t_sim(100)
t_sim(100, 0, 1, 0.5, 1)
```
```
## [1] 0.5065619
## [1] 0.001844064
```
Use `replicate` to calculate power for 100 subjects/group with an effect size of 0\.2 (e.g., A: m \= 0, SD \= 1; B: m \= 0\.2, SD \= 1\). Use 1000 replications.
```
reps <- replicate(1000, t_sim(100, 0, 1, 0.2, 1))
power <- mean(reps < .05)
power
```
```
## [1] 0.268
```
Compare this to power calculated from the `power.t.test` function.
```
power.t.test(n = 100, delta = 0.2, sd = 1, type="two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
Calculate power via simulation and `power.t.test` for the following tests (a sketch of the calls appears after the list):
* 20 subjects/group, A: m \= 0, SD \= 1; B: m \= 0\.2, SD \= 1
* 40 subjects/group, A: m \= 0, SD \= 1; B: m \= 0\.2, SD \= 1
* 20 subjects/group, A: m \= 10, SD \= 1; B: m \= 12, SD \= 1\.5
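A sketch of how these calls might look, using the `t_sim()` defined above (results not shown; note that `power.t.test()` assumes a single common SD, so it only approximates the third design):
```
# simulation-based power, 1000 replications each
mean(replicate(1000, t_sim(20, 0, 1, 0.2, 1)) < .05)
mean(replicate(1000, t_sim(40, 0, 1, 0.2, 1)) < .05)
mean(replicate(1000, t_sim(20, 10, 1, 12, 1.5)) < .05)

# analytic power for the first two designs
power.t.test(n = 20, delta = 0.2, sd = 1, type = "two.sample")
power.t.test(n = 40, delta = 0.2, sd = 1, type = "two.sample")
```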
7\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [argument](https://psyteachr.github.io/glossary/a#argument) | A variable that provides input to a function. |
| [data type](https://psyteachr.github.io/glossary/d#data.type) | The kind of data represented by an object. |
| [double](https://psyteachr.github.io/glossary/d#double) | A data type representing a real decimal number |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [iteration](https://psyteachr.github.io/glossary/i#iteration) | Repeating a process or function |
| [matrix](https://psyteachr.github.io/glossary/m#matrix) | A container data type consisting of numbers arranged into a fixed number of rows and columns |
7\.8 Exercises
--------------
Download the [exercises](exercises/07_func_exercise.Rmd). See the [answers](exercises/07_func_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(7)
# run this to access the answers
dataskills::exercise(7, answers = TRUE)
```
Chapter 7 Iteration \& Functions
================================
7\.1 Learning Objectives
------------------------
You will learn about functions and iteration by using simulation to calculate a power analysis for an independent samples t\-test.
### 7\.1\.1 Basic
1. Work with basic [iteration functions](func.html#iteration-functions) `rep`, `seq`, `replicate` [(video)](https://youtu.be/X3zFA71JzgE)
2. Use [`map()` and `apply()` functions](func.html#map-apply) [(video)](https://youtu.be/HcZxQZwJ8T4)
3. Write your own [custom functions](func.html#custom-functions) with `function()` [(video)](https://youtu.be/Qqjva0xgYC4)
4. Set [default values](func.html#defaults) for the arguments in your functions
### 7\.1\.2 Intermediate
5. Understand [scope](func.html#scope)
6. Use [error handling and warnings](func.html#warnings-errors) in a function
### 7\.1\.3 Advanced
The topics below are not (yet) covered in these materials, but they are directions for independent learning.
7. Repeat commands having multiple arguments using `purrr::map2_*()` and `purrr::pmap_*()`
8. Create **nested data frames** using `dplyr::group_by()` and `tidyr::nest()`
9. Work with **nested data frames** in `dplyr`
10. Capture and deal with errors using ‘adverb’ functions `purrr::safely()` and `purrr::possibly()`
7\.2 Resources
--------------
* Chapters 19 and 21 of [R for Data Science](http://r4ds.had.co.nz)
* [RStudio Apply Functions Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/purrr.pdf)
In the next two lectures, we are going to learn more about [iteration](https://psyteachr.github.io/glossary/i#iteration "Repeating a process or function") (doing the same commands over and over) and custom [functions](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") through a data simulation exercise, which will also lead us into more traditional statistical topics.
7\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse) ## contains purrr, tidyr, dplyr
library(broom) ## converts test output to tidy tables
set.seed(8675309) # makes sure random numbers are reproducible
```
7\.4 Iteration functions
------------------------
We first learned about the two basic iteration functions, `rep()` and `seq()` in the [Working with Data](data.html#rep_seq) chapter.
### 7\.4\.1 rep()
The function `rep()` lets you repeat the first argument a number of times.
Use `rep()` to create a vector of alternating `"A"` and `"B"` values of length 24\.
```
rep(c("A", "B"), 12)
```
```
## [1] "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A"
## [20] "B" "A" "B" "A" "B"
```
If you don’t specify what the second argument is, it defaults to `times`, repeating the vector in the first argument that many times. Make the same vector as above, setting the second argument explicitly.
```
rep(c("A", "B"), times = 12)
```
```
## [1] "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A"
## [20] "B" "A" "B" "A" "B"
```
If the second argument is a vector that is the same length as the first argument, each element in the first vector is repeated than many times. Use `rep()` to create a vector of 11 `"A"` values followed by 3 `"B"` values.
```
rep(c("A", "B"), c(11, 3))
```
```
## [1] "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "B" "B" "B"
```
You can repeat each element of the vector a specified number of times using the `each` argument. Use `rep()` to create a vector of 12 `"A"` values followed by 12 `"B"` values.
```
rep(c("A", "B"), each = 12)
```
```
## [1] "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "B" "B" "B" "B" "B" "B" "B"
## [20] "B" "B" "B" "B" "B"
```
What do you think will happen if you set both `times` to 3 and `each` to 2?
```
rep(c("A", "B"), times = 3, each = 2)
```
```
## [1] "A" "A" "B" "B" "A" "A" "B" "B" "A" "A" "B" "B"
```
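As a further illustration (a made\-up example, not from the original text), you can combine `times` and `each` to label the cells of a small factorial design:
```
# 12 rows: group A then group B, each crossed with easy/hard condition labels
tibble(
  group     = rep(c("A", "B"), each = 6),
  condition = rep(c("easy", "hard"), times = 2, each = 3)
)
```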
### 7\.4\.2 seq()
The function `seq()` is useful for generating a sequence of numbers with some pattern.
Use `seq()` to create a vector of the integers 0 to 10\.
```
seq(0, 10)
```
```
## [1] 0 1 2 3 4 5 6 7 8 9 10
```
You can set the `by` argument to count by numbers other than 1 (the default). Use `seq()` to create a vector of the numbers 0 to 100 by 10s.
```
seq(0, 100, by = 10)
```
```
## [1] 0 10 20 30 40 50 60 70 80 90 100
```
The argument `length.out` is useful if you know how many steps you want to divide something into. Use `seq()` to create a vector that starts with 0, ends with 100, and has 12 equally spaced steps (hint: how many numbers would be in a vector with 2 *steps*?).
```
seq(0, 100, length.out = 13)
```
```
## [1] 0.000000 8.333333 16.666667 25.000000 33.333333 41.666667
## [7] 50.000000 58.333333 66.666667 75.000000 83.333333 91.666667
## [13] 100.000000
```
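To make the hint concrete (a quick check, not part of the original text): covering a range in 2 equal steps takes 3 numbers, which is why 12 steps above needed `length.out = 13`.
```
seq(0, 100, length.out = 3)  # 2 steps -> 3 numbers: 0, 50, 100
```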
### 7\.4\.3 replicate()
You can use the `replicate()` function to run a function `n` times.
For example, you can get 3 sets of 5 numbers from a random normal distribution by setting `n` to `3` and `expr` to `rnorm(5)`.
```
replicate(n = 3, expr = rnorm(5))
```
```
## [,1] [,2] [,3]
## [1,] -0.9965824 0.98721974 -1.5495524
## [2,] 0.7218241 0.02745393 1.0226378
## [3,] -0.6172088 0.67287232 0.1500832
## [4,] 2.0293916 0.57206650 -0.6599640
## [5,] 1.0654161 0.90367770 -0.9945890
```
By default, `replicate()` simplifies your result into a [matrix](https://psyteachr.github.io/glossary/m#matrix "A container data type consisting of numbers arranged into a fixed number of rows and columns") that is easy to convert into a table if your function returns vectors that are the same length. If you’d rather have a list of vectors, set `simplify = FALSE`.
```
replicate(n = 3, expr = rnorm(5), simplify = FALSE)
```
```
## [[1]]
## [1] 1.9724587 -0.4418016 -0.9006372 -0.1505882 -0.8278942
##
## [[2]]
## [1] 1.98582582 0.04400503 -0.40428231 -0.47299855 -0.41482324
##
## [[3]]
## [1] 0.6832342 0.6902011 0.5334919 -0.1861048 0.3829458
```
### 7\.4\.4 map() and apply() functions
`purrr::map()` and `lapply()` return a list of the same length as a vector or list, each element of which is the result of applying a function to the corresponding element. They function much the same, but purrr functions have some optimisations for working with the tidyverse. We’ll be working mostly with purrr functions in this course, but apply functions are very common in code that you might see in examples on the web.
Imagine you want to calculate the power for a two\-sample t\-test with a mean difference of 0\.2 and SD of 1, for all the sample sizes 100 to 1000 (by 100s). You could run the `power.t.test()` function 10 times, extract the value for “power” from each resulting list, and put them in a table.
```
p100 <- power.t.test(n = 100, delta = 0.2, sd = 1, type="two.sample")
# 8 more lines
p1000 <- power.t.test(n = 1000, delta = 0.2, sd = 1, type="two.sample")
tibble(
n = c(100, "...", 1000),
power = c(p100$power, "...", p1000$power)
)
```
| n | power |
| --- | --- |
| 100 | 0\.290266404572217 |
| … | … |
| 1000 | 0\.9939638 |
However, the `apply()` and `map()` functions allow you to perform a function on each item in a vector or list. First make an object `n` that is the vector of the sample sizes you want to test, then use `lapply()` or `map()` to run the function `power.t.test()` on each item. You can set other arguments to `power.t.test()` after the function argument.
```
n <- seq(100, 1000, 100)
pcalc <- lapply(n, power.t.test,
delta = 0.2, sd = 1, type="two.sample")
# or
pcalc <- purrr::map(n, power.t.test,
delta = 0.2, sd = 1, type="two.sample")
```
These functions return a list where each item is the result of `power.t.test()`, which returns a list of results that includes the named item “power.” This is a special list that has a summary format if you just print it directly:
```
pcalc[[1]]
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
But you can see the individual items using the `str()` function.
```
pcalc[[1]] %>% str()
```
```
## List of 8
## $ n : num 100
## $ delta : num 0.2
## $ sd : num 1
## $ sig.level : num 0.05
## $ power : num 0.29
## $ alternative: chr "two.sided"
## $ note : chr "n is number in *each* group"
## $ method : chr "Two-sample t test power calculation"
## - attr(*, "class")= chr "power.htest"
```
`sapply()` is a version of `lapply()` that returns a vector or array instead of a list, where appropriate. The corresponding purrr functions are `map_dbl()`, `map_chr()`, `map_int()` and `map_lgl()`, which return vectors with the corresponding [data type](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object.").
You can extract a value from a list with the function `[[`. You usually see this written as `pcalc[[1]]`, but if you put it inside backticks, you can use it in apply and map functions.
```
sapply(pcalc, `[[`, "power")
```
```
## [1] 0.2902664 0.5140434 0.6863712 0.8064964 0.8847884 0.9333687 0.9623901
## [8] 0.9792066 0.9887083 0.9939638
```
We use `map_dbl()` here because the value for “power” is a [double](https://psyteachr.github.io/glossary/d#double "A data type representing a real decimal number").
```
purrr::map_dbl(pcalc, `[[`, "power")
```
```
## [1] 0.2902664 0.5140434 0.6863712 0.8064964 0.8847884 0.9333687 0.9623901
## [8] 0.9792066 0.9887083 0.9939638
```
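The other typed variants work the same way. For example (a short illustration, not part of the original walkthrough), you can pull out a character element or compute a logical from each result:
```
# the method name is a character string in each power.htest object
purrr::map_chr(pcalc, `[[`, "method")

# TRUE/FALSE: does each sample size reach at least 80% power?
purrr::map_lgl(pcalc, function(x) x$power >= 0.8)
```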
We can use the `map()` functions inside a `mutate()` function to run the `power.t.test()` function on the value of `n` from each row of a table, then extract the value for “power,” and delete the column with the power calculations.
```
mypower <- tibble(
n = seq(100, 1000, 100)) %>%
mutate(pcalc = purrr::map(n, power.t.test,
delta = 0.2,
sd = 1,
type="two.sample"),
power = purrr::map_dbl(pcalc, `[[`, "power")) %>%
select(-pcalc)
```
Figure 7\.1: Power for a two\-sample t\-test with d \= 0\.2
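The plot itself is not reproduced here; a minimal ggplot2 sketch like the one below (using the `mypower` table just created) would draw the same power curve. The axis labels are assumptions for the example.
```
ggplot(mypower, aes(x = n, y = power)) +
  geom_point() +
  geom_line() +
  labs(x = "Sample size per group", y = "Power",
       title = "Power for a two-sample t-test with d = 0.2")
```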
7\.5 Custom functions
---------------------
In addition to the built\-in functions and functions you can access from packages, you can also write your own functions (and eventually even packages!).
### 7\.5\.1 Structuring a function
The general structure of a function is as follows:
```
function_name <- function(my_args) {
# process the arguments
# return some value
}
```
Here is a very simple function. Can you guess what it does?
```
add1 <- function(my_number) {
my_number + 1
}
add1(10)
```
```
## [1] 11
```
Let’s make a function that reports p\-values in APA format (with “p \= \[rounded value]” when p \>\= .001 and “p \< .001” when p \< .001\).
First, we have to name the function. You can name it anything, but try not to duplicate existing functions or you will overwrite them. For example, if you call your function `rep`, then you will need to use `base::rep()` to access the normal `rep` function. Let’s call our p\-value function `report_p` and set up the framework of the function.
```
report_p <- function() {
}
```
### 7\.5\.2 Arguments
We need to add one [argument](https://psyteachr.github.io/glossary/a#argument "A variable that provides input to a function."), the p\-value you want to report. The names you choose for the arguments are local to the function, so it is not a problem if they conflict with other variables in your script. You put the arguments in the parentheses of `function()` in the order you want them to be matched by position (just like the built\-in functions you’ve used before).
```
report_p <- function(p) {
}
```
### 7\.5\.3 Argument defaults
You can add a default value to any argument. If that argument is skipped, then the function uses the default argument. It probably doesn’t make sense to run this function without specifying the p\-value, but we can add a second argument called `digits` that defaults to 3, so we can round p\-values to any number of digits.
```
report_p <- function(p, digits = 3) {
}
```
Now we need to write some code inside the function to process the input arguments and turn them into a **return**ed output. Put the output as the last item in the function.
```
report_p <- function(p, digits = 3) {
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
reported = paste("p =", roundp)
}
reported
}
```
You might also see the returned output inside of the `return()` function. This does the same thing.
```
report_p <- function(p, digits = 3) {
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
reported = paste("p =", roundp)
}
return(reported)
}
```
When you run the code defining your function, it doesn’t output anything, but makes a new object in the Environment tab under **`Functions`**. Now you can run the function.
```
report_p(0.04869)
report_p(0.0000023)
```
```
## [1] "p = 0.049"
## [1] "p < .001"
```
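You can also override the `digits` default to see it in action (a small extra check, not in the original):
```
report_p(0.04869, digits = 2)  # "p = 0.05"
```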
### 7\.5\.4 Scope
What happens in a function stays in a function. You can change the value of a variable passed to a function, but that won’t change the value of the variable outside of the function, even if that variable has the same name as the one in the function.
```
reported <- "not changed"
# inside this function, reported == "p = 0.002"
report_p(0.0023)
reported # still "not changed"
```
```
## [1] "p = 0.002"
## [1] "not changed"
```
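The reverse also holds: variables created inside the function body, such as `roundp`, don’t exist in your workspace afterwards. A quick check (assuming you haven’t created a `roundp` object yourself):
```
report_p(0.0023)
exists("roundp")  # FALSE: roundp only existed inside the function call
```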
### 7\.5\.5 Warnings and errors
What happens when you omit the argument for `p`? Or if you set `p` to 1\.5 or “a”?
You might want to add a more specific warning and stop running the function code if someone enters a value that isn’t a number. You can do this with the `stop()` function.
If someone enters a number that isn’t possible for a p\-value (i.e., outside the range 0\-1), you might want to warn them that this is probably not what they intended, but still continue with the function. You can do this with `warning()`.
```
report_p <- function(p, digits = 3) {
if (!is.numeric(p)) stop("p must be a number")
if (p <= 0) warning("p-values are normally greater than 0")
if (p >= 1) warning("p-values are normally less than 1")
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
reported = paste("p =", roundp)
}
reported
}
```
```
report_p()
```
```
## Error in report_p(): argument "p" is missing, with no default
```
```
report_p("a")
```
```
## Error in report_p("a"): p must be a number
```
```
report_p(-2)
```
```
## Warning in report_p(-2): p-values are normally greater than 0
```
```
report_p(2)
```
```
## Warning in report_p(2): p-values are normally less than 1
```
```
## [1] "p < .001"
## [1] "p = 2"
```
7\.6 Iterating your own functions
---------------------------------
First, let’s build up the code that we want to iterate.
### 7\.6\.1 rnorm()
Create a vector of 20 random numbers drawn from a normal distribution with a mean of 5 and standard deviation of 1 using the `rnorm()` function and store them in the variable `A`.
```
A <- rnorm(20, mean = 5, sd = 1)
```
### 7\.6\.2 tibble::tibble()
A `tibble` is a type of table or `data.frame`. The function `tibble::tibble()` creates a tibble with a column for each argument. Each argument takes the form `column_name = data_vector`.
Create a table called `dat` including two vectors: `A` that is a vector of 20 random normally distributed numbers with a mean of 5 and SD of 1, and `B` that is a vector of 20 random normally distributed numbers with a mean of 5\.5 and SD of 1\.
```
dat <- tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
)
```
### 7\.6\.3 t.test()
You can run a Welch two\-sample t\-test by including the two samples you made as the first two arguments to the function `t.test`. You can reference one column of a table by its name using the format `table_name$column_name`.
```
t.test(dat$A, dat$B)
```
```
##
## Welch Two Sample t-test
##
## data: dat$A and dat$B
## t = -1.7716, df = 36.244, p-value = 0.08487
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -1.2445818 0.0838683
## sample estimates:
## mean of x mean of y
## 4.886096 5.466453
```
You can also convert the table to long format using the `gather` function and specify the t\-test using the format `dv_column~grouping_column`.
```
longdat <- gather(dat, group, score, A:B)
t.test(score~group, data = longdat)
```
```
##
## Welch Two Sample t-test
##
## data: score by group
## t = -1.7716, df = 36.244, p-value = 0.08487
## alternative hypothesis: true difference in means between group A and group B is not equal to 0
## 95 percent confidence interval:
## -1.2445818 0.0838683
## sample estimates:
## mean in group A mean in group B
## 4.886096 5.466453
```
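If you are using a newer version of tidyr, `pivot_longer()` does the same reshaping as `gather()`; this is an optional drop\-in alternative, not part of the original exercise:
```
longdat2 <- pivot_longer(dat, cols = A:B,
                         names_to = "group", values_to = "score")
t.test(score ~ group, data = longdat2)
```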
### 7\.6\.4 broom::tidy()
You can use the function `broom::tidy()` to extract the data from a statistical test in a table format. The example below pipes everything together.
```
tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy()
```
| estimate | estimate1 | estimate2 | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \-0\.6422108 | 5\.044009 | 5\.68622 | \-2\.310591 | 0\.0264905 | 37\.27083 | \-1\.205237 | \-0\.0791844 | Welch Two Sample t\-test | two.sided |
In the pipeline above, `t.test(score~group, data = .)` uses the `.` notation to change the location of the piped\-in data table from its default position as the first argument to a different position.
Finally, we can extract a single value from this results table using `pull()`.
```
tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy() %>%
pull(p.value)
```
```
## [1] 0.7075268
```
### 7\.6\.5 Custom function: t\_sim()
First, name your function `t_sim` and wrap the code above in a function with no arguments.
```
t_sim <- function() {
tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy() %>%
pull(p.value)
}
```
Run it a few times to see what happens.
```
t_sim()
```
```
## [1] 0.00997552
```
### 7\.6\.6 Iterate t\_sim()
Let’s run the `t_sim` function 1000 times, assign the resulting p\-values to a vector called `reps`, and check what proportion of p\-values are lower than alpha (e.g., .05\). This number is the power for this analysis.
```
reps <- replicate(1000, t_sim())
alpha <- .05
power <- mean(reps < alpha)
power
```
```
## [1] 0.328
```
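As a sanity check (an aside, not in the original text), the analytic power for this design, 20 per group with a mean difference of 0\.5 and SD of 1, should come out close to the simulated estimate above:
```
power.t.test(n = 20, delta = 0.5, sd = 1, type = "two.sample")
```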
### 7\.6\.7 Set seed
You can use the `set.seed` function before you run a function that uses random numbers to make sure that you get the same random data back each time. You can use any integer you like as the seed.
```
set.seed(90201)
```
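For example (a quick demonstration, assuming `t_sim()` as defined above), re\-setting the same seed before each call returns the identical p\-value:
```
set.seed(90201)
t_sim()   # some p-value

set.seed(90201)
t_sim()   # exactly the same p-value again
```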
Make sure you don’t ever use `set.seed()` **inside** of a simulation function, or you will just simulate the exact same data over and over again.
Figure 7\.2: @KellyBodwin
### 7\.6\.8 Add arguments
You could just edit your function each time you want to calculate power for a different sample n, but it is more efficient to build this into your function as arguments. Redefine `t_sim`, setting arguments for the mean and SD of group A, the mean and SD of group B, and the number of subjects per group. Give them all default values.
```
t_sim <- function(n = 10, m1=0, sd1=1, m2=0, sd2=1) {
tibble(
A = rnorm(n, m1, sd1),
B = rnorm(n, m2, sd2)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy() %>%
pull(p.value)
}
```
Test your function with some different values to see if the results make sense.
```
t_sim(100)
t_sim(100, 0, 1, 0.5, 1)
```
```
## [1] 0.5065619
## [1] 0.001844064
```
Use `replicate` to calculate power for 100 subjects/group with an effect size of 0\.2 (e.g., A: m \= 0, SD \= 1; B: m \= 0\.2, SD \= 1\). Use 1000 replications.
```
reps <- replicate(1000, t_sim(100, 0, 1, 0.2, 1))
power <- mean(reps < .05)
power
```
```
## [1] 0.268
```
Compare this to power calculated from the `power.t.test` function.
```
power.t.test(n = 100, delta = 0.2, sd = 1, type="two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
Calculate power via simulation and `power.t.test` for the following tests (a worked template for the first one is sketched after this list):
* 20 subjects/group, A: m \= 0, SD \= 1; B: m \= 0\.2, SD \= 1
* 40 subjects/group, A: m \= 0, SD \= 1; B: m \= 0\.2, SD \= 1
* 20 subjects/group, A: m \= 10, SD \= 1; B: m \= 12, SD \= 1\.5
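A hedged template for the first design (the other two follow the same pattern, changing `n` and the group parameters):
```
# simulation estimate: 20 subjects/group, means 0 and 0.2, SDs of 1
reps20 <- replicate(1000, t_sim(20, 0, 1, 0.2, 1))
mean(reps20 < .05)

# analytic comparison
power.t.test(n = 20, delta = 0.2, sd = 1, type = "two.sample")
```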
7\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [argument](https://psyteachr.github.io/glossary/a#argument) | A variable that provides input to a function. |
| [data type](https://psyteachr.github.io/glossary/d#data.type) | The kind of data represented by an object. |
| [double](https://psyteachr.github.io/glossary/d#double) | A data type representing a real decimal number |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [iteration](https://psyteachr.github.io/glossary/i#iteration) | Repeating a process or function |
| [matrix](https://psyteachr.github.io/glossary/m#matrix) | A container data type consisting of numbers arranged into a fixed number of rows and columns |
7\.8 Exercises
--------------
Download the [exercises](exercises/07_func_exercise.Rmd). See the [answers](exercises/07_func_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(7)
# run this to access the answers
dataskills::exercise(7, answers = TRUE)
```
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/func.html |
Chapter 7 Iteration \& Functions
================================
7\.1 Learning Objectives
------------------------
You will learn about functions and iteration by using simulation to calculate a power analysis for an independent samples t\-test.
### 7\.1\.1 Basic
1. Work with basic [iteration functions](func.html#iteration-functions) `rep`, `seq`, `replicate` [(video)](https://youtu.be/X3zFA71JzgE)
2. Use [`map()` and `apply()` functions](func.html#map-apply) [(video)](https://youtu.be/HcZxQZwJ8T4)
3. Write your own [custom functions](func.html#custom-functions) with `function()` [(video)](https://youtu.be/Qqjva0xgYC4)
4. Set [default values](func.html#defaults) for the arguments in your functions
### 7\.1\.2 Intermediate
5. Understand [scope](func.html#scope)
6. Use [error handling and warnings](func.html#warnings-errors) in a function
### 7\.1\.3 Advanced
The topics below are not (yet) covered in these materials, but they are directions for independent learning.
7. Repeat commands having multiple arguments using `purrr::map2_*()` and `purrr::pmap_*()`
8. Create **nested data frames** using `dplyr::group_by()` and `tidyr::nest()`
9. Work with **nested data frames** in `dplyr`
10. Capture and deal with errors using ‘adverb’ functions `purrr::safely()` and `purrr::possibly()`
7\.2 Resources
--------------
* Chapters 19 and 21 of [R for Data Science](http://r4ds.had.co.nz)
* [RStudio Apply Functions Cheatsheet](https://github.com/rstudio/cheatsheets/raw/master/purrr.pdf)
In the next two lectures, we are going to learn more about [iteration](https://psyteachr.github.io/glossary/i#iteration "Repeating a process or function") (doing the same commands over and over) and custom [functions](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") through a data simulation exercise, which will also lead us into more traditional statistical topics.
7\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse) ## contains purrr, tidyr, dplyr
library(broom) ## converts test output to tidy tables
set.seed(8675309) # makes sure random numbers are reproducible
```
7\.4 Iteration functions
------------------------
We first learned about the two basic iteration functions, `rep()` and `seq()` in the [Working with Data](data.html#rep_seq) chapter.
### 7\.4\.1 rep()
The function `rep()` lets you repeat the first argument a number of times.
Use `rep()` to create a vector of alternating `"A"` and `"B"` values of length 24\.
```
rep(c("A", "B"), 12)
```
```
## [1] "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A"
## [20] "B" "A" "B" "A" "B"
```
If you don’t specify what the second argument is, it defaults to `times`, repeating the vector in the first argument that many times. Make the same vector as above, setting the second argument explicitly.
```
rep(c("A", "B"), times = 12)
```
```
## [1] "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A" "B" "A"
## [20] "B" "A" "B" "A" "B"
```
If the second argument is a vector that is the same length as the first argument, each element in the first vector is repeated that many times. Use `rep()` to create a vector of 11 `"A"` values followed by 3 `"B"` values.
```
rep(c("A", "B"), c(11, 3))
```
```
## [1] "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "B" "B" "B"
```
You can repeat each element of the vector a specified number of times using the `each` argument. Use `rep()` to create a vector of 12 `"A"` values followed by 12 `"B"` values.
```
rep(c("A", "B"), each = 12)
```
```
## [1] "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "A" "B" "B" "B" "B" "B" "B" "B"
## [20] "B" "B" "B" "B" "B"
```
What do you think will happen if you set both `times` to 3 and `each` to 2?
```
rep(c("A", "B"), times = 3, each = 2)
```
```
## [1] "A" "A" "B" "B" "A" "A" "B" "B" "A" "A" "B" "B"
```
### 7\.4\.2 seq()
The function `seq()` is useful for generating a sequence of numbers with some pattern.
Use `seq()` to create a vector of the integers 0 to 10\.
```
seq(0, 10)
```
```
## [1] 0 1 2 3 4 5 6 7 8 9 10
```
You can set the `by` argument to count by numbers other than 1 (the default). Use `seq()` to create a vector of the numbers 0 to 100 by 10s.
```
seq(0, 100, by = 10)
```
```
## [1] 0 10 20 30 40 50 60 70 80 90 100
```
The argument `length.out` is useful if you know how many steps you want to divide something into. Use `seq()` to create a vector that starts with 0, ends with 100, and has 12 equally spaced steps (hint: how many numbers would be in a vector with 2 *steps*?).
```
seq(0, 100, length.out = 13)
```
```
## [1] 0.000000 8.333333 16.666667 25.000000 33.333333 41.666667
## [7] 50.000000 58.333333 66.666667 75.000000 83.333333 91.666667
## [13] 100.000000
```
### 7\.4\.3 replicate()
You can use the `replicate()` function to run a function `n` times.
For example, you can get 3 sets of 5 numbers from a random normal distribution by setting `n` to `3` and `expr` to `rnorm(5)`.
```
replicate(n = 3, expr = rnorm(5))
```
```
## [,1] [,2] [,3]
## [1,] -0.9965824 0.98721974 -1.5495524
## [2,] 0.7218241 0.02745393 1.0226378
## [3,] -0.6172088 0.67287232 0.1500832
## [4,] 2.0293916 0.57206650 -0.6599640
## [5,] 1.0654161 0.90367770 -0.9945890
```
By default, `replicate()` simplifies your result into a [matrix](https://psyteachr.github.io/glossary/m#matrix "A container data type consisting of numbers arranged into a fixed number of rows and columns") that is easy to convert into a table if your function returns vectors that are the same length. If you’d rather have a list of vectors, set `simplify = FALSE`.
```
replicate(n = 3, expr = rnorm(5), simplify = FALSE)
```
```
## [[1]]
## [1] 1.9724587 -0.4418016 -0.9006372 -0.1505882 -0.8278942
##
## [[2]]
## [1] 1.98582582 0.04400503 -0.40428231 -0.47299855 -0.41482324
##
## [[3]]
## [1] 0.6832342 0.6902011 0.5334919 -0.1861048 0.3829458
```
### 7\.4\.4 map() and apply() functions
`purrr::map()` and `lapply()` return a list of the same length as a vector or list, each element of which is the result of applying a function to the corresponding element. They function much the same, but purrr functions have some optimisations for working with the tidyverse. We’ll be working mostly with purrr functions in this course, but apply functions are very common in code that you might see in examples on the web.
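As a minimal illustration (this toy example is not part of the original lesson), both calls below apply `sqrt()` to each element of a numeric vector and return a list of length three.
```
# one list element per input element
lapply(c(1, 4, 9), sqrt)
purrr::map(c(1, 4, 9), sqrt)
```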
Imagine you want to calculate the power for a two\-sample t\-test with a mean difference of 0\.2 and SD of 1, for all the sample sizes 100 to 1000 (by 100s). You could run the `power.t.test()` function 10 times, extract the value for “power” from each result, and put the values in a table.
```
p100 <- power.t.test(n = 100, delta = 0.2, sd = 1, type="two.sample")
# 8 more lines
p1000 <- power.t.test(n = 1000, delta = 0.2, sd = 1, type="two.sample")
tibble(
n = c(100, "...", 1000),
power = c(p100$power, "...", p1000$power)
)
```
| n | power |
| --- | --- |
| 100 | 0\.290266404572217 |
| … | … |
| 1000 | 0\.9939638 |
However, the `apply()` and `map()` functions allow you to perform a function on each item in a vector or list. First make an object `n` that is the vector of the sample sizes you want to test, then use `lapply()` or `map()` to run the function `power.t.test()` on each item. You can set other arguments to `power.t.test()` after the function argument.
```
n <- seq(100, 1000, 100)
pcalc <- lapply(n, power.t.test,
delta = 0.2, sd = 1, type="two.sample")
# or
pcalc <- purrr::map(n, power.t.test,
delta = 0.2, sd = 1, type="two.sample")
```
These functions return a list where each item is the result of `power.t.test()`, which returns a list of results that includes the named item “power.” This is a special list that has a summary format if you just print it directly:
```
pcalc[[1]]
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
But you can see the individual items using the `str()` function.
```
pcalc[[1]] %>% str()
```
```
## List of 8
## $ n : num 100
## $ delta : num 0.2
## $ sd : num 1
## $ sig.level : num 0.05
## $ power : num 0.29
## $ alternative: chr "two.sided"
## $ note : chr "n is number in *each* group"
## $ method : chr "Two-sample t test power calculation"
## - attr(*, "class")= chr "power.htest"
```
`sapply()` is a version of `lapply()` that returns a vector or array instead of a list, where appropriate. The corresponding purrr functions are `map_dbl()`, `map_chr()`, `map_int()` and `map_lgl()`, which return vectors with the corresponding [data type](https://psyteachr.github.io/glossary/d#data-type "The kind of data represented by an object.").
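As a minimal illustration (this toy example is not part of the original lesson), both calls below return a numeric vector rather than a list.
```
# sapply() simplifies to a vector where possible;
# map_dbl() always returns a double vector (or throws an error if it can't)
sapply(c(1, 4, 9), sqrt)
purrr::map_dbl(c(1, 4, 9), sqrt)
```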
You can extract a value from a list with the function `[[`. You usually see this written as `pcalc[[1]]`, but if you put it inside backticks, you can use it in apply and map functions.
```
sapply(pcalc, `[[`, "power")
```
```
## [1] 0.2902664 0.5140434 0.6863712 0.8064964 0.8847884 0.9333687 0.9623901
## [8] 0.9792066 0.9887083 0.9939638
```
We use `map_dbl()` here because the value for “power” is a [double](https://psyteachr.github.io/glossary/d#double "A data type representing a real decimal number").
```
purrr::map_dbl(pcalc, `[[`, "power")
```
```
## [1] 0.2902664 0.5140434 0.6863712 0.8064964 0.8847884 0.9333687 0.9623901
## [8] 0.9792066 0.9887083 0.9939638
```
We can use the `map()` functions inside a `mutate()` function to run the `power.t.test()` function on the value of `n` from each row of a table, then extract the value for “power,” and delete the column with the power calculations.
```
mypower <- tibble(
n = seq(100, 1000, 100)) %>%
mutate(pcalc = purrr::map(n, power.t.test,
delta = 0.2,
sd = 1,
type="two.sample"),
power = purrr::map_dbl(pcalc, `[[`, "power")) %>%
select(-pcalc)
```
Figure 7\.1: Power for a two\-sample t\-test with d \= 0\.2
7\.5 Custom functions
---------------------
In addition to the built\-in functions and functions you can access from packages, you can also write your own functions (and eventually even packages!).
### 7\.5\.1 Structuring a function
The general structure of a function is as follows:
```
function_name <- function(my_args) {
# process the arguments
# return some value
}
```
Here is a very simple function. Can you guess what it does?
```
add1 <- function(my_number) {
my_number + 1
}
add1(10)
```
```
## [1] 11
```
Let’s make a function that reports p\-values in APA format (with “p \= \[rounded value]” when p \>\= .001 and “p \< .001” when p \< .001\).
First, we have to name the function. You can name it anything, but try not to duplicate existing functions or you will overwrite them. For example, if you call your function `rep`, then you will need to use `base::rep()` to access the normal `rep` function. Let’s call our p\-value function `report_p` and set up the framework of the function.
```
report_p <- function() {
}
```
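As an aside, here is a small illustration of the masking issue mentioned above (this snippet is not part of the original lesson).
```
rep <- function(x) "oops" # your rep() now masks base::rep()
rep(1:3)                  # calls your version and returns "oops"
base::rep(1:3, 2)         # still reaches the original function
rm(rep)                   # delete your version to unmask base::rep()
```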
### 7\.5\.2 Arguments
We need to add one [argument](https://psyteachr.github.io/glossary/a#argument "A variable that provides input to a function."), the p\-value you want to report. The names you choose for the arguments are private to that function, so it is not a problem if they conflict with other variables in your script. You put the arguments in the parentheses of `function()` in the order you want them to be matched by position (just like the built\-in functions you’ve used before).
```
report_p <- function(p) {
}
```
### 7\.5\.3 Argument defaults
You can add a default value to any argument. If that argument is skipped, then the function uses the default argument. It probably doesn’t make sense to run this function without specifying the p\-value, but we can add a second argument called `digits` that defaults to 3, so we can round p\-values to any number of digits.
```
report_p <- function(p, digits = 3) {
}
```
Now we need to write some code inside the function to process the input arguments and turn them into a **return**ed output. Put the output as the last item in the function.
```
report_p <- function(p, digits = 3) {
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
reported = paste("p =", roundp)
}
reported
}
```
You might also see the returned output inside of the `return()` function. This does the same thing.
```
report_p <- function(p, digits = 3) {
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
reported = paste("p =", roundp)
}
return(reported)
}
```
When you run the code defining your function, it doesn’t output anything, but makes a new object in the Environment tab under **`Functions`**. Now you can run the function.
```
report_p(0.04869)
report_p(0.0000023)
```
```
## [1] "p = 0.049"
## [1] "p < .001"
```
### 7\.5\.4 Scope
What happens in a function stays in a function. You can change the value of a variable passed to a function, but that won’t change the value of the variable outside of the function, even if that variable has the same name as the one in the function.
```
reported <- "not changed"
# inside this function, reported == "p = 0.002"
report_p(0.0023)
reported # still "not changed"
```
```
## [1] "p = 0.002"
## [1] "not changed"
```
### 7\.5\.5 Warnings and errors
What happens when you omit the argument for `p`? Or if you set `p` to 1\.5 or “a”?
You might want to add a more specific warning and stop running the function code if someone enters a value that isn’t a number. You can do this with the `stop()` function.
If someone enters a number that isn’t possible for a p\-value (0\-1\), you might want to warn them that this is probably not what they intended, but still continue with the function. You can do this with `warning()`.
```
report_p <- function(p, digits = 3) {
if (!is.numeric(p)) stop("p must be a number")
if (p <= 0) warning("p-values are normally greater than 0")
if (p >= 1) warning("p-values are normally less than 1")
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
reported = paste("p =", roundp)
}
reported
}
```
```
report_p()
```
```
## Error in report_p(): argument "p" is missing, with no default
```
```
report_p("a")
```
```
## Error in report_p("a"): p must be a number
```
```
report_p(-2)
```
```
## Warning in report_p(-2): p-values are normally greater than 0
```
```
report_p(2)
```
```
## Warning in report_p(2): p-values are normally less than 1
```
```
## [1] "p < .001"
## [1] "p = 2"
```
7\.6 Iterating your own functions
---------------------------------
First, let’s build up the code that we want to iterate.
### 7\.6\.1 rnorm()
Create a vector of 20 random numbers drawn from a normal distribution with a mean of 5 and standard deviation of 1 using the `rnorm()` function and store them in the variable `A`.
```
A <- rnorm(20, mean = 5, sd = 1)
```
### 7\.6\.2 tibble::tibble()
A `tibble` is a type of table or `data.frame`. The function `tibble::tibble()` creates a tibble with a column for each argument. Each argument takes the form `column_name = data_vector`.
Create a table called `dat` including two vectors: `A` that is a vector of 20 random normally distributed numbers with a mean of 5 and SD of 1, and `B` that is a vector of 20 random normally distributed numbers with a mean of 5\.5 and SD of 1\.
```
dat <- tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
)
```
### 7\.6\.3 t.test()
You can run a Welch two\-sample t\-test by including the two samples you made as the first two arguments to the function `t.test`. You can reference one column of a table by its name using the format `table_name$column_name`.
```
t.test(dat$A, dat$B)
```
```
##
## Welch Two Sample t-test
##
## data: dat$A and dat$B
## t = -1.7716, df = 36.244, p-value = 0.08487
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -1.2445818 0.0838683
## sample estimates:
## mean of x mean of y
## 4.886096 5.466453
```
You can also convert the table to long format using the `gather` function and specify the t\-test using the format `dv_column~grouping_column`.
```
longdat <- gather(dat, group, score, A:B)
t.test(score~group, data = longdat)
```
```
##
## Welch Two Sample t-test
##
## data: score by group
## t = -1.7716, df = 36.244, p-value = 0.08487
## alternative hypothesis: true difference in means between group A and group B is not equal to 0
## 95 percent confidence interval:
## -1.2445818 0.0838683
## sample estimates:
## mean in group A mean in group B
## 4.886096 5.466453
```
### 7\.6\.4 broom::tidy()
You can use the function `broom::tidy()` to extract the data from a statistical test in a table format. The example below pipes everything together.
```
tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy()
```
| estimate | estimate1 | estimate2 | statistic | p.value | parameter | conf.low | conf.high | method | alternative |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| \-0\.6422108 | 5\.044009 | 5\.68622 | \-2\.310591 | 0\.0264905 | 37\.27083 | \-1\.205237 | \-0\.0791844 | Welch Two Sample t\-test | two.sided |
In the pipeline above, `t.test(score~group, data = .)` uses the `.` notation to move the piped\-in data table from its default position as the first argument to the `data` argument.
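As a minimal illustration of the `.` placeholder (this toy example is not part of the original lesson), the piped\-in vector below is placed as the *second* argument to `paste()`.
```
# without the `.`, the piped vector would be used as the first argument
c("A", "B") %>% paste("group", .)
```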
Finally, we can extract a single value from this results table using `pull()`.
```
tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy() %>%
pull(p.value)
```
```
## [1] 0.7075268
```
### 7\.6\.5 Custom function: t\_sim()
First, name your function `t_sim` and wrap the code above in a function with no arguments.
```
t_sim <- function() {
tibble(
A = rnorm(20, 5, 1),
B = rnorm(20, 5.5, 1)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy() %>%
pull(p.value)
}
```
Run it a few times to see what happens.
```
t_sim()
```
```
## [1] 0.00997552
```
### 7\.6\.6 Iterate t\_sim()
Let’s run the `t_sim` function 1000 times, assign the resulting p\-values to a vector called `reps`, and check what proportion of p\-values are lower than alpha (e.g., .05\). This number is the power for this analysis.
```
reps <- replicate(1000, t_sim())
alpha <- .05
power <- mean(reps < alpha)
power
```
```
## [1] 0.328
```
### 7\.6\.7 Set seed
You can use the `set.seed` function before you run a function that uses random numbers to make sure that you get the same random data back each time. You can use any integer you like as the seed.
```
set.seed(90201)
```
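As a quick check (this snippet is not part of the original lesson), resetting the seed before each call reproduces exactly the same draws.
```
set.seed(90201)
first_draw <- rnorm(3)
set.seed(90201)
second_draw <- rnorm(3)
identical(first_draw, second_draw) # TRUE: same seed, same draws
```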
Make sure you don’t ever use `set.seed()` **inside** of a simulation function, or you will just simulate the exact same data over and over again.
Figure 7\.2: @KellyBodwin
### 7\.6\.8 Add arguments
You could edit your function each time you want to calculate power for a different sample size, but it is more efficient to build these values into your function as arguments. Redefine `t_sim`, setting arguments for the mean and SD of group A, the mean and SD of group B, and the number of subjects per group. Give them all default values.
```
t_sim <- function(n = 10, m1=0, sd1=1, m2=0, sd2=1) {
tibble(
A = rnorm(n, m1, sd1),
B = rnorm(n, m2, sd2)
) %>%
gather(group, score, A:B) %>%
t.test(score~group, data = .) %>%
broom::tidy() %>%
pull(p.value)
}
```
Test your function with some different values to see if the results make sense.
```
t_sim(100)
t_sim(100, 0, 1, 0.5, 1)
```
```
## [1] 0.5065619
## [1] 0.001844064
```
Use `replicate` to calculate power for 100 subjects/group with an effect size of 0\.2 (e.g., A: m \= 0, SD \= 1; B: m \= 0\.2, SD \= 1\). Use 1000 replications.
```
reps <- replicate(1000, t_sim(100, 0, 1, 0.2, 1))
power <- mean(reps < .05)
power
```
```
## [1] 0.268
```
Compare this to power calculated from the `power.t.test` function.
```
power.t.test(n = 100, delta = 0.2, sd = 1, type="two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
Calculate power via simulation and `power.t.test` for the following tests (a sketch for the first case is shown after this list):
* 20 subjects/group, A: m \= 0, SD \= 1; B: m \= 0\.2, SD \= 1
* 40 subjects/group, A: m \= 0, SD \= 1; B: m \= 0\.2, SD \= 1
* 20 subjects/group, A: m \= 10, SD \= 1; B: m \= 12, SD \= 1\.5
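For the first case, one possible setup looks like the sketch below (this is our illustration, not the official answer; the object name `reps_20` is hypothetical, and simulated power will vary slightly from run to run).
```
# 20 subjects/group, A: m = 0, SD = 1; B: m = 0.2, SD = 1
reps_20 <- replicate(1000, t_sim(20, 0, 1, 0.2, 1))
mean(reps_20 < .05) # power estimated by simulation

# analytic power for the same design
power.t.test(n = 20, delta = 0.2, sd = 1, type = "two.sample")$power
```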
7\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [argument](https://psyteachr.github.io/glossary/a#argument) | A variable that provides input to a function. |
| [data type](https://psyteachr.github.io/glossary/d#data.type) | The kind of data represented by an object. |
| [double](https://psyteachr.github.io/glossary/d#double) | A data type representing a real decimal number |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [iteration](https://psyteachr.github.io/glossary/i#iteration) | Repeating a process or function |
| [matrix](https://psyteachr.github.io/glossary/m#matrix) | A container data type consisting of numbers arranged into a fixed number of rows and columns |
7\.8 Exercises
--------------
Download the [exercises](exercises/07_func_exercise.Rmd). See the [answers](exercises/07_func_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(7)
# run this to access the answers
dataskills::exercise(7, answers = TRUE)
```
7\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [argument](https://psyteachr.github.io/glossary/a#argument) | A variable that provides input to a function. |
| [data type](https://psyteachr.github.io/glossary/d#data.type) | The kind of data represented by an object. |
| [double](https://psyteachr.github.io/glossary/d#double) | A data type representing a real decimal number |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [iteration](https://psyteachr.github.io/glossary/i#iteration) | Repeating a process or function |
| [matrix](https://psyteachr.github.io/glossary/m#matrix) | A container data type consisting of numbers arranged into a fixed number of rows and columns |
7\.8 Exercises
--------------
Download the [exercises](exercises/07_func_exercise.Rmd). See the [answers](exercises/07_func_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(7)
# run this to access the answers
dataskills::exercise(7, answers = TRUE)
```
Chapter 8 Probability \& Simulation
===================================
8\.1 Learning Objectives
------------------------
### 8\.1\.1 Basic
1. Generate and plot data randomly sampled from common distributions [(video)](https://youtu.be/iuecrT3q1kg)
* [uniform](sim.html#uniform)
* [binomial](sim.html#binomial)
* [normal](sim.html#normal)
* [poisson](sim.html#poisson)
2. Generate related variables from a [multivariate](sim.html#mvdist) distribution [(video)](https://youtu.be/B14HfWQ1kIc)
3. Define the following [statistical terms](sim.html#stat-terms):
* [p\-value](sim.html#p-value)
* [alpha](sim.html#alpha)
* [power](sim.html#power)
* smallest effect size of interest ([SESOI](#sesoi))
* [false positive](sim.html#false-pos) (type I error)
* [false negative](#false-neg) (type II error)
* confidence interval ([CI](#conf-inf))
4. Test sampled distributions against a null hypothesis [(video)](https://youtu.be/Am3G6rA2S1s)
* [exact binomial test](sim.html#exact-binom)
* [t\-test](sim.html#t-test) (1\-sample, independent samples, paired samples)
* [correlation](sim.html#correlation) (pearson, kendall and spearman)
5. [Calculate power](sim.html#calc-power-binom) using iteration and a sampling function
### 8\.1\.2 Intermediate
6. Calculate the minimum sample size for a specific power level and design
8\.2 Resources
--------------
* [Stub for this lesson](stubs/8_sim.Rmd)
* [Distribution Shiny App](http://shiny.psy.gla.ac.uk/debruine/simulate/) (or run `dataskills::app("simulate")`)
* [Simulation tutorials](https://debruine.github.io/tutorials/sim-data.html)
* [Chapter 21: Iteration](http://r4ds.had.co.nz/iteration.html) of *R for Data Science*
* [Improving your statistical inferences](https://www.coursera.org/learn/statistical-inferences/) on Coursera (week 1\)
* [Faux](https://debruine.github.io/faux/) package for data simulation
* [Simulation\-Based Power\-Analysis for Factorial ANOVA Designs](https://psyarxiv.com/baxsf) ([Daniel Lakens and Caldwell 2019](#ref-lakens_caldwell_2019))
* [Understanding mixed effects models through data simulation](https://psyarxiv.com/xp5cy/) ([DeBruine and Barr 2019](#ref-debruine_barr_2019))
8\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(plotly)
library(faux)
set.seed(8675309) # makes sure random numbers are reproducible
```
Simulating data is a very powerful way to test your understanding of statistical concepts. We are going to use [simulations](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") to learn the basics of [probability](https://psyteachr.github.io/glossary/p#probability "A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty").
8\.4 Univariate Distributions
-----------------------------
First, we need to understand some different ways data might be distributed and how to simulate data from these distributions. A [univariate](https://psyteachr.github.io/glossary/u#univariate "Relating to a single variable.") distribution is the distribution of a single variable.
### 8\.4\.1 Uniform Distribution
The [uniform distribution](https://psyteachr.github.io/glossary/u#uniform-distribution "A distribution where all numbers in the range have an equal probability of being sampled") is the simplest distribution. All numbers in the range have an equal probability of being sampled.
Take a minute to think of things in your own research that are uniformly distributed.
#### 8\.4\.1\.1 Continuous distribution
`runif(n, min=0, max=1)`
Use `runif()` to sample from a continuous uniform distribution.
```
u <- runif(100000, min = 0, max = 1)
# plot to visualise
ggplot() +
geom_histogram(aes(u), binwidth = 0.05, boundary = 0,
fill = "white", colour = "black")
```
#### 8\.4\.1\.2 Discrete
`sample(x, size, replace = FALSE, prob = NULL)`
Use `sample()` to sample from a [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") distribution.
You can use `sample()` to simulate events like rolling dice or choosing from a deck of cards. The code below simulates rolling a 6\-sided die 10000 times. We set `replace` to `TRUE` so that each event is independent. See what happens if you set `replace` to `FALSE`.
```
rolls <- sample(1:6, 10000, replace = TRUE)
# plot the results
ggplot() +
geom_histogram(aes(rolls), binwidth = 1,
fill = "white", color = "black")
```
Figure 8\.1: Distribution of dice rolls.
You can also use sample to sample from a list of named outcomes.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
sample(pet_types, 10, replace = TRUE)
```
```
## [1] "cat" "cat" "cat" "cat" "ferret" "dog" "bird" "cat"
## [9] "dog" "fish"
```
Ferrets are a much less common pet than cats and dogs, so our sample isn’t very realistic. You can set the probabilities of each item in the list with the `prob` argument.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
pet_prob <- c(0.3, 0.4, 0.1, 0.1, 0.1)
sample(pet_types, 10, replace = TRUE, prob = pet_prob)
```
```
## [1] "fish" "dog" "cat" "dog" "cat" "dog" "fish" "dog" "cat" "fish"
```
### 8\.4\.2 Binomial Distribution
The [binomial distribution](https://psyteachr.github.io/glossary/b#binomial-distribution "The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. ") is useful for modelling binary data, where each observation can have one of two outcomes, like success/failure, yes/no or head/tails.
`rbinom(n, size, prob)`
The `rbinom` function will generate a random binomial distribution.
* `n` \= number of observations
* `size` \= number of trials
* `prob` \= probability of success on each trial
Coin flips are a typical example of a binomial distribution, where we can assign heads to 1 and tails to 0\.
```
# 20 individual coin flips of a fair coin
rbinom(20, 1, 0.5)
```
```
## [1] 1 1 1 0 1 1 0 1 0 0 1 1 1 0 0 0 1 0 0 0
```
```
# 20 individual coin flips of a biased (0.75) coin
rbinom(20, 1, 0.75)
```
```
## [1] 1 1 1 0 1 0 1 1 1 0 1 1 1 0 0 1 1 1 1 1
```
You can generate the total number of heads in 1 set of 20 coin flips by setting `size` to 20 and `n` to 1\.
```
rbinom(1, 20, 0.75)
```
```
## [1] 13
```
You can generate more sets of 20 coin flips by increasing the `n`.
```
rbinom(10, 20, 0.5)
```
```
## [1] 10 14 11 7 11 13 6 10 9 9
```
You should always inspect your randomly generated data to check that it makes sense. For large samples, it’s easiest to do that graphically. A histogram is usually the best choice for plotting binomial data.
```
flips <- rbinom(1000, 20, 0.5)
ggplot() +
geom_histogram(
aes(flips),
binwidth = 1,
fill = "white",
color = "black"
)
```
Run the simulation above several times, noting how the histogram changes. Try changing the values of `n`, `size`, and `prob`.
### 8\.4\.3 Normal Distribution
`rnorm(n, mean, sd)`
We can simulate a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") of size `n` if we know the `mean` and standard deviation (`sd`). A density plot is usually the best way to visualise this type of data if your `n` is large.
```
dv <- rnorm(1e5, 10, 2)
# proportions of normally-distributed data
# within 1, 2, or 3 SD of the mean
sd1 <- .6827
sd2 <- .9545
sd3 <- .9973
ggplot() +
geom_density(aes(dv), fill = "white") +
geom_vline(xintercept = mean(dv), color = "red") +
geom_vline(xintercept = quantile(dv, .5 - sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 + sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 - sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 + sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 - sd3/2), color = "purple") +
geom_vline(xintercept = quantile(dv, .5 + sd3/2), color = "purple") +
scale_x_continuous(
limits = c(0,20),
breaks = seq(0,20)
)
```
Run the simulation above several times, noting how the density plot changes. What do the vertical lines represent? Try changing the values of `n`, `mean`, and `sd`.
### 8\.4\.4 Poisson Distribution
The [Poisson distribution](https://psyteachr.github.io/glossary/p#poisson-distribution "A distribution that models independent events happening over a unit of time") is useful for modelling events, like how many times something happens over a unit of time, as long as the events are independent (e.g., an event having happened in one time period doesn’t make it more or less likely to happen in the next).
`rpois(n, lambda)`
The `rpois` function will generate a random Poisson distribution.
* `n` \= number of observations
* `lambda` \= the mean number of events per observation
Let’s say we want to model how many texts you get each day for a whole year. You know that you get an average of 20 texts per day. So we set `n = 365` and `lambda = 20`. Lambda is a [parameter](https://psyteachr.github.io/glossary/p#parameter "A value that describes a distribution, such as the mean or SD") that describes the Poisson distribution, just like mean and standard deviation are parameters that describe the normal distribution.
```
texts <- rpois(n = 365, lambda = 20)
ggplot() +
geom_histogram(
aes(texts),
binwidth = 1,
fill = "white",
color = "black"
)
```
So we can see that over a year, you’re unlikely to get fewer than 5 texts in a day, or more than 35 (although it’s not impossible).
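If you want to put rough numbers on those tails, the theoretical Poisson probabilities can be checked with `ppois()`:
```
ppois(4, lambda = 20)        # probability of fewer than 5 texts in a day
1 - ppois(35, lambda = 20)   # probability of more than 35 texts in a day
```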
8\.5 Multivariate Distributions
-------------------------------
### 8\.5\.1 Bivariate Normal
A [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate-normal "Two normally distributed vectors that have a specified correlation with each other.") distribution is two normally distributed vectors that have a specified relationship, or [correlation](https://psyteachr.github.io/glossary/c#correlation "The relationship two vectors have to each other.") to each other.
What if we want to sample from a population with specific relationships between variables? We can sample from a bivariate normal distribution using `mvrnorm()` from the `MASS` package.
Don’t load MASS with the `library()` function: it masks the `select()` function from dplyr, so you would then have to write `dplyr::select()` every time you wanted to select columns. Just use `MASS::mvrnorm()`.
You need to know how many observations you want to simulate (`n`) and the means of the two variables (`mu`), and you need to calculate a [covariance matrix](https://psyteachr.github.io/glossary/c#covariance-matrix "Parameters showing how a set of vectors vary and are correlated.") (`sigma`) from the correlation between the variables (`rho`) and their standard deviations (`sd`).
```
n <- 1000 # number of random samples
# name the mu values to give the resulting columns names
mu <- c(x = 10, y = 20) # the means of the samples
sd <- c(5, 6) # the SDs of the samples
rho <- 0.5 # population correlation between the two variables
# correlation matrix
cor_mat <- matrix(c( 1, rho,
rho, 1), 2)
# create the covariance matrix
sigma <- (sd %*% t(sd)) * cor_mat
# sample from bivariate normal distribution
bvn <- MASS::mvrnorm(n, mu, sigma)
```
Plot your sampled variables to check everything worked like you expect. It’s easiest to convert the output of `mvrnorm()` into a tibble in order to use it in ggplot.
```
bvn %>%
as_tibble() %>%
ggplot(aes(x, y)) +
geom_point(alpha = 0.5) +
geom_smooth(method = "lm") +
geom_density2d()
```
```
## `geom_smooth()` using formula 'y ~ x'
```
### 8\.5\.2 Multivariate Normal
You can generate more than 2 correlated variables, but it gets a little trickier to create the correlation matrix.
```
n <- 200 # number of random samples
mu <- c(x = 10, y = 20, z = 30) # the means of the samples
sd <- c(8, 9, 10) # the SDs of the samples
rho1_2 <- 0.5 # correlation between x and y
rho1_3 <- 0 # correlation between x and z
rho2_3 <- 0.7 # correlation between y and z
# correlation matrix
cor_mat <- matrix(c( 1, rho1_2, rho1_3,
rho1_2, 1, rho2_3,
rho1_3, rho2_3, 1), 3)
sigma <- (sd %*% t(sd)) * cor_mat
bvn3 <- MASS::mvrnorm(n, mu, sigma)
cor(bvn3) # check correlation matrix
```
```
## x y z
## x 1.0000000 0.5896674 0.1513108
## y 0.5896674 1.0000000 0.7468737
## z 0.1513108 0.7468737 1.0000000
```
You can use the `plotly` library to make a 3D graph.
```
#set up the marker style
marker_style = list(
color = "#ff0000",
line = list(
color = "#444",
width = 1
),
opacity = 0.5,
size = 5
)
# convert bvn3 to a tibble, plot and add markers
bvn3 %>%
as_tibble() %>%
plot_ly(x = ~x, y = ~y, z = ~z, marker = marker_style) %>%
add_markers()
```
### 8\.5\.3 Faux
Alternatively, you can use the package [faux](https://debruine.github.io/faux/) to generate any number of correlated variables. It also has a function for checking the parameters of your new simulated data (`check_sim_stats()`).
```
bvn3 <- rnorm_multi(
n = n,
vars = 3,
mu = mu,
sd = sd,
r = c(rho1_2, rho1_3, rho2_3),
varnames = c("x", "y", "z")
)
check_sim_stats(bvn3)
```
| n | var | x | y | z | mean | sd |
| --- | --- | --- | --- | --- | --- | --- |
| 200 | x | 1\.00 | 0\.54 | 0\.10 | 10\.35 | 7\.66 |
| 200 | y | 0\.54 | 1\.00 | 0\.67 | 20\.01 | 8\.77 |
| 200 | z | 0\.10 | 0\.67 | 1\.00 | 30\.37 | 9\.59 |
You can also use faux to simulate data for factorial designs. Set up the between\-subject and within\-subject factors as lists with the levels as (named) vectors. Means and standard deviations can be included as vectors or data frames. The function calculates sigma for you, structures your dataset, and outputs a plot of the design.
```
b <- list(pet = c(cat = "Cat Owners",
dog = "Dog Owners"))
w <- list(time = c("morning",
"noon",
"night"))
mu <- data.frame(
cat = c(10, 12, 14),
dog = c(10, 15, 20),
row.names = w$time
)
sd <- c(3, 3, 3, 5, 5, 5)
pet_data <- sim_design(
within = w,
between = b,
n = 100,
mu = mu,
sd = sd,
r = .5)
```
You can use the `check_sim_stats()` function, but you need to set the argument `between` to a vector of all the between\-subject factor columns.
```
check_sim_stats(pet_data, between = "pet")
```
| pet | n | var | morning | noon | night | mean | sd |
| --- | --- | --- | --- | --- | --- | --- | --- |
| cat | 100 | morning | 1\.00 | 0\.57 | 0\.51 | 10\.62 | 3\.48 |
| cat | 100 | noon | 0\.57 | 1\.00 | 0\.59 | 12\.44 | 3\.01 |
| cat | 100 | night | 0\.51 | 0\.59 | 1\.00 | 14\.61 | 3\.14 |
| dog | 100 | morning | 1\.00 | 0\.55 | 0\.50 | 9\.44 | 4\.92 |
| dog | 100 | noon | 0\.55 | 1\.00 | 0\.48 | 14\.18 | 5\.90 |
| dog | 100 | night | 0\.50 | 0\.48 | 1\.00 | 19\.42 | 5\.36 |
See the [faux website](https://debruine.github.io/faux/) for more detailed tutorials.
8\.6 Statistical terms
----------------------
Let’s review some important statistical terms before we review tests of distributions.
### 8\.6\.1 Effect
The [effect](https://psyteachr.github.io/glossary/e#effect "Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value.") is some measure of your data. This will depend on the type of data you have and the type of statistical test you are using. For example, if you flipped a coin 100 times and it landed heads 66 times, the effect would be 66/100\. You can then use the exact binomial test to compare this effect to the [null effect](https://psyteachr.github.io/glossary/n#null-effect "An outcome that does not show an otherwise expected effect.") you would expect from a fair coin (50/100\) or to any other effect you choose. The [effect size](https://psyteachr.github.io/glossary/e#effect-size "The difference between the effect in your data and the null effect (usually a chance value)") refers to the difference between the effect in your data and the null effect (usually a chance value).
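For example, the coin result above could be tested like this (the exact binomial test is covered in more detail in the Tests section below):
```
# 66 heads out of 100 flips, tested against the null effect of 0.5
binom.test(x = 66, n = 100, p = 0.5)
```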
### 8\.6\.2 P\-value
The [p\-value](https://psyteachr.github.io/glossary/p#p-value "The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect)") of a test is the probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect). So if you used a binomial test to test against a chance probability of 1/6 (e.g., the probability of rolling 1 with a 6\-sided die), then a p\-value of 0\.17 means that you could expect to see effects at least as extreme as your data 17% of the time just by chance alone.
### 8\.6\.3 Alpha
If you are using null hypothesis significance testing ([NHST](https://psyteachr.github.io/glossary/n#nhst "Null Hypothesis Signficance Testing")), then you need to decide on a cutoff value ([alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot")) for making a decision to reject the null hypothesis. We call p\-values below the alpha cutoff [significant](https://psyteachr.github.io/glossary/s#significant "The conclusion when the p-value is less than the critical alpha. "). In psychology, alpha is traditionally set at 0\.05, but there are good arguments for [setting a different criterion in some circumstances](http://daniellakens.blogspot.com/2019/05/justifying-your-alpha-by-minimizing-or.html).
### 8\.6\.4 False Positive/Negative
The probability that a test concludes there is an effect when there is really no effect (e.g., concludes a fair coin is biased) is called the [false positive](https://psyteachr.github.io/glossary/f#false-positive "When a test concludes there is an effect when there really is no effect") rate (or [Type I Error](https://psyteachr.github.io/glossary/t#type-i-error "A false positive; When a test concludes there is an effect when there is really is no effect") Rate). The [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") is the false positive rate we accept for a test. The probability that a test concludes there is no effect when there really is one (e.g., concludes a biased coin is fair) is called the [false negative](https://psyteachr.github.io/glossary/f#false-negative "When a test concludes there is no effect when there really is an effect") rate (or [Type II Error](https://psyteachr.github.io/glossary/t#type-ii-error "A false negative; When a test concludes there is no effect when there is really is an effect") Rate). The [beta](https://psyteachr.github.io/glossary/b#beta "The false negative rate we accept for a statistical test.") is the false negative rate we accept for a test.
The false positive rate is not the overall probability of getting a false positive, but the probability of a false positive *under the null hypothesis*. Similarly, the false negative rate is the probability of a false negative *under the alternative hypothesis*. Unless we know the probability that we are testing a null effect, we can’t say anything about the overall probability of false positives or negatives. If 100% of the hypotheses we test are false, then all significant effects are false positives, but if all of the hypotheses we test are true, then all of the positives are true positives and the overall false positive rate is 0\.
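You can see the first point in a quick simulation sketch: when the null hypothesis is true, the long-run proportion of significant results should be close to alpha.
```
# two samples drawn from the same population, so the null is true
null_ps <- replicate(1e4, t.test(rnorm(30), rnorm(30))$p.value)
mean(null_ps < 0.05)   # long-run false positive rate, approximately 0.05
```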
### 8\.6\.5 Power and SESOI
[Power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") is equal to 1 minus beta (i.e., the [true positive](https://psyteachr.github.io/glossary/t#true-positive "When a test concludes there is an effect when there is really is an effect") rate), and depends on the effect size, how many samples we take (n), and what we set alpha to. For any test, if you specify all but one of these values, you can calculate the last. The effect size you use in power calculations should be the smallest effect size of interest ([SESOI](https://psyteachr.github.io/glossary/s#sesoi "Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful")). See ([Daniël Lakens, Scheel, and Isager 2018](#ref-TOSTtutorial))([https://doi.org/10\.1177/2515245918770963](https://doi.org/10.1177/2515245918770963)) for a tutorial on methods for choosing an SESOI.
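For example, `power.t.test()` will solve for whichever of these values you leave out; here it returns the n per group needed for 80% power with an alpha of 0.05 and an illustrative SESOI of 0.5 SD:
```
# n is omitted, so power.t.test() solves for it
power.t.test(delta = 0.5, sd = 1, sig.level = 0.05, power = 0.8,
             type = "two.sample")
```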
Let’s say you want to be able to detect at least a 15% difference from chance (50%) in a coin’s fairness, and you want your test to have a 5% chance of false positives and a 10% chance of false negatives. What are the following values?
* alpha \=
* beta \=
* false positive rate \=
* false negative rate \=
* power \=
* SESOI \=
### 8\.6\.6 Confidence Intervals
The [confidence interval](https://psyteachr.github.io/glossary/c#confidence-interval "A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic.") is a range around some value (such as a mean) that has some probability of containing the parameter, if you repeated the process many times. Traditionally in psychology, we use 95% confidence intervals, but you can calculate CIs for any percentage.
A 95% CI does *not* mean that there is a 95% probability that the true mean lies within this range, but that, if you repeated the study many times and calculated the CI this same way every time, you’d expect the true mean to be inside the CI in 95% of the studies. This seems like a subtle distinction, but can lead to some misunderstandings. See ([Morey et al. 2016](#ref-Morey2016))([https://link.springer.com/article/10\.3758/s13423\-015\-0947\-8](https://link.springer.com/article/10.3758/s13423-015-0947-8)) for more detailed discussion.
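You can convince yourself of this interpretation with a small simulation sketch: sample repeatedly from a known population, compute a 95% CI each time, and count how often the CI contains the true mean.
```
ci_covers_mu <- function(n = 30, mu = 0) {
  ci <- t.test(rnorm(n, mu, 1))$conf.int  # 95% CI for this sample
  ci[1] < mu & mu < ci[2]                 # does it contain the true mean?
}
mean(replicate(1e4, ci_covers_mu()))      # should be close to 0.95
```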
8\.7 Tests
----------
### 8\.7\.1 Exact binomial test
`binom.test(x, n, p)`
You can test a binomial distribution against a specific probability using the exact binomial test.
* `x` \= the number of successes
* `n` \= the number of trials
* `p` \= hypothesised probability of success
Here we can test a series of 10 coin flips from a fair coin and a biased coin against the hypothesised probability of 0\.5 (even odds).
```
n <- 10
fair_coin <- rbinom(1, n, 0.5)
biased_coin <- rbinom(1, n, 0.6)
binom.test(fair_coin, n, p = 0.5)
binom.test(biased_coin, n, p = 0.5)
```
```
##
## Exact binomial test
##
## data: fair_coin and n
## number of successes = 6, number of trials = 10, p-value = 0.7539
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.2623781 0.8784477
## sample estimates:
## probability of success
## 0.6
##
##
## Exact binomial test
##
## data: biased_coin and n
## number of successes = 8, number of trials = 10, p-value = 0.1094
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.4439045 0.9747893
## sample estimates:
## probability of success
## 0.8
```
Run the code above several times, noting the p\-values for the fair and biased coins. Alternatively, you can [simulate coin flips](http://shiny.psy.gla.ac.uk/debruine/coinsim/) online and build up a graph of results and p\-values.
* How does the p\-value vary for the fair and biased coins?
* What happens to the confidence intervals if you increase n from 10 to 100?
* What criterion would you use to tell if the observed data indicate the coin is fair or biased?
* How often do you conclude the fair coin is biased (false positives)?
* How often do you conclude the biased coin is fair (false negatives)?
#### 8\.7\.1\.1 Sampling function
To estimate these rates, we need to repeat the sampling above many times. A [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") is ideal for repeating the exact same procedure over and over. Set the arguments of the function to variables that you might want to change. Here, we will want to estimate power for:
* different sample sizes (`n`)
* different effects (`bias`)
* different hypothesised probabilities (`p`, defaults to 0\.5\)
```
sim_binom_test <- function(n, bias, p = 0.5) {
# simulate 1 coin flip n times with the specified bias
coin <- rbinom(1, n, bias)
# run a binomial test on the simulated data for the specified p
btest <- binom.test(coin, n, p)
# return the p-value of this test
btest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_binom_test(100, 0.6)
```
```
## [1] 0.1332106
```
#### 8\.7\.1\.2 Calculate power
Then you can use the `replicate()` function to run it many times and save all the output values. You can calculate the [power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") of your analysis by checking the proportion of your simulated analyses that have a p\-value less than your [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") (the probability of rejecting the null hypothesis when the null hypothesis is true).
```
my_reps <- replicate(1e4, sim_binom_test(100, 0.6))
alpha <- 0.05 # this does not always have to be 0.05
mean(my_reps < alpha)
```
```
## [1] 0.4561
```
`1e4` is just scientific notation for a 1 followed by 4 zeros (`10000`). When you’re running simulations, you usually want to run a lot of them. It’s a pain to keep track of whether you’ve typed 5 or 6 zeros (100000 vs 1000000\) and this will change your running time by an order of magnitude.
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
### 8\.7\.2 T\-test
`t.test(x, y, alternative, mu, paired)`
Use a t\-test to compare the mean of one distribution to a null hypothesis (one\-sample t\-test), compare the means of two samples (independent\-samples t\-test), or compare pairs of values (paired\-samples t\-test).
You can run a one\-sample t\-test comparing the mean of your data to `mu`. Here is a simulated distribution with a mean of 0\.5 and an SD of 1, creating an effect size of 0\.5 SD when tested against a `mu` of 0\. Run the simulation a few times to see how often the t\-test returns a significant p\-value (or run it in the [shiny app](http://shiny.psy.gla.ac.uk/debruine/normsim/)).
```
sim_norm <- rnorm(100, 0.5, 1)
t.test(sim_norm, mu = 0)
```
```
##
## One Sample t-test
##
## data: sim_norm
## t = 6.2874, df = 99, p-value = 8.758e-09
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.4049912 0.7784761
## sample estimates:
## mean of x
## 0.5917337
```
Run an independent\-samples t\-test by comparing two lists of values.
```
a <- rnorm(100, 0.5, 1)
b <- rnorm(100, 0.7, 1)
t_ind <- t.test(a, b, paired = FALSE)
t_ind
```
```
##
## Welch Two Sample t-test
##
## data: a and b
## t = -1.8061, df = 197.94, p-value = 0.07243
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.54825320 0.02408469
## sample estimates:
## mean of x mean of y
## 0.4585985 0.7206828
```
The `paired` argument defaults to `FALSE`, but it’s good practice to always explicitly set it so you are never confused about what type of test you are performing.
#### 8\.7\.2\.1 Sampling function
We can use the `names()` function to find out the names of all the components of the `t.test` output and use this to extract just one piece of information, like the test statistic (e.g., the t\-value).
```
names(t_ind)
t_ind$statistic
```
```
## [1] "statistic" "parameter" "p.value" "conf.int" "estimate"
## [6] "null.value" "stderr" "alternative" "method" "data.name"
## t
## -1.806051
```
If you want to run the simulation many times and record information each time, first you need to turn your simulation into a function.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2) {
# simulate v1
v1 <- rnorm(n, m1, sd1)
#simulate v2
v2 <- rnorm(n, m2, sd2)
# compare using an independent samples t-test
t_ind <- t.test(v1, v2, paired = FALSE)
# return the p-value
return(t_ind$p.value)
}
```
Run it a few times to check that it gives you sensible values.
```
sim_t_ind(100, 0.7, 1, 0.5, 1)
```
```
## [1] 0.362521
```
#### 8\.7\.2\.2 Calculate power
Now replicate the simulation 1000 times.
```
my_reps <- replicate(1e4, sim_t_ind(100, 0.7, 1, 0.5, 1))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.2925
```
Run the code above several times. How much does the power value fluctuate? How many replications do you need to run to get a reliable estimate of power?
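One way to get a feel for this is to re-estimate power at increasing numbers of replications and watch the estimates stabilise, for example:
```
# estimated power at 100, 1000, and 10000 replications
sapply(c(100, 1000, 10000), function(reps) {
  mean(replicate(reps, sim_t_ind(100, 0.7, 1, 0.5, 1)) < 0.05)
})
```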
Compare your power estimate from simulation to a power calculation using `power.t.test()`. Here, `delta` is the difference between `m1` and `m2` above.
```
power.t.test(n = 100,
delta = 0.2,
sd = 1,
sig.level = alpha,
type = "two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
What do you think the distribution of p\-values is when there is no effect (i.e., the means are identical)? Check this yourself.
Make sure the `boundary` argument is set to `0` for p\-value histograms. See what happens with a null effect if `boundary` is not set.
### 8\.7\.3 Correlation
You can test if continuous variables are related to each other using the `cor()` function. Let’s use `rnorm_multi()` to make a quick table of correlated values.
```
dat <- rnorm_multi(
n = 100,
vars = 2,
r = -0.5,
varnames = c("x", "y")
)
cor(dat$x, dat$y)
```
```
## [1] -0.4960331
```
Set `n` to a large number like 1e6 so that the correlations are less affected by chance. Change the value of the **mean** for `x` or `y`. Does it change the correlation between `x` and `y`? What happens when you increase or decrease the **sd**? Can you work out any rules here?
`cor()` defaults to Pearson’s correlations. Set the `method` argument to use Kendall or Spearman correlations.
```
cor(dat$x, dat$y, method = "spearman")
```
```
## [1] -0.4724992
```
#### 8\.7\.3\.1 Sampling function
Create a function that creates two variables with `n` observations and `r` correlation. Use the function `cor.test()` to give you p\-values for the correlation.
```
sim_cor_test <- function(n = 100, r = 0) {
dat <- rnorm_multi(
n = n,
vars = 2,
r = r,
varnames = c("x", "y")
)
ctest <- cor.test(dat$x, dat$y)
ctest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_cor_test(50, .5)
```
```
## [1] 0.001354836
```
#### 8\.7\.3\.2 Calculate power
Now replicate the simulation 1000 times.
```
my_reps <- replicate(1e4, sim_cor_test(50, 0.5))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.965
```
Compare to the value calculated by the pwr package.
```
pwr::pwr.r.test(n = 50, r = 0.5)
```
```
##
## approximate correlation power calculation (arctangh transformation)
##
## n = 50
## r = 0.5
## sig.level = 0.05
## power = 0.9669813
## alternative = two.sided
```
8\.8 Example
------------
This example uses the [Growth Chart Data Tables](https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv) from the [US CDC](https://www.cdc.gov/growthcharts/zscore.htm). The data consist of height in centimeters for the z\-scores of –2, \-1\.5, \-1, \-0\.5, 0, 0\.5, 1, 1\.5, and 2 by sex (1\=male; 2\=female) and half\-month of age (from 24\.0 to 240\.5 months).
### 8\.8\.1 Load \& wrangle
We have to do a little data wrangling first. Have a look at the data after you import it and relabel `Sex` to `male` and `female` instead of `1` and `2`. Also convert `Agemos` (age in months) to years. Relabel the column `0` as `mean` and calculate a new column named `sd` as the difference between columns `1` and `0`.
```
orig_height_age <- read_csv("https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## Sex = col_character(),
## Agemos = col_character(),
## `-2` = col_double(),
## `-1.5` = col_double(),
## `-1` = col_double(),
## `-0.5` = col_double(),
## `0` = col_double(),
## `0.5` = col_double(),
## `1` = col_double(),
## `1.5` = col_double(),
## `2` = col_double()
## )
```
```
height_age <- orig_height_age %>%
filter(Sex %in% c(1,2)) %>%
mutate(
sex = recode(Sex, "1" = "male", "2" = "female"),
age = as.numeric(Agemos)/12,
sd = `1` - `0`
) %>%
select(sex, age, mean = `0`, sd)
```
### 8\.8\.2 Plot
Plot your new data frame to see how mean height changes with age for boys and girls.
```
ggplot(height_age, aes(age, mean, color = sex)) +
geom_smooth(aes(ymin = mean - sd,
ymax = mean + sd),
stat="identity")
```
### 8\.8\.3 Simulate a population
Simulate 50 random male heights and 50 random female heights for 20\-year\-olds using the `rnorm()` function and the means and SDs from the `height_age` table. Plot the data.
```
age_filter <- 20
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_height <- tibble(
male = rnorm(50, m$mean, m$sd),
female = rnorm(50, f$mean, f$sd)
) %>%
gather("sex", "height", male:female)
ggplot(sim_height) +
geom_density(aes(height, fill = sex), alpha = 0.5) +
xlim(125, 225)
```
Run the simulation above several times, noting how the density plot changes. Try changing the age you’re simulating.
### 8\.8\.4 Analyse simulated data
Use the `sim_t_ind(n, m1, sd1, m2, sd2)` function we created above to generate one simulation with a sample size of 50 in each group using the means and SDs of male and female 14\-year\-olds.
```
age_filter <- 14
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_t_ind(50, m$mean, m$sd, f$mean, f$sd)
```
```
## [1] 0.0005255744
```
### 8\.8\.5 Replicate simulation
Now replicate this 1e4 times using the `replicate()` function. This function will save the returned p\-values in a vector (`my_reps`). We can then check what proportion of those p\-values are less than our alpha value. This is the power of our test.
```
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.6403
```
### 8\.8\.6 One\-tailed prediction
This design has about 65% power to detect the sex difference in height (with a 2\-tailed test). Modify the `sim_t_ind` function for a 1\-tailed prediction.
You could just set `alternative` equal to “greater” in the function, but it might be better to add the `alt` argument to your function (giving it the same default value as `t.test`) and change the value of `alternative` in the function to `alt`.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2, alt = "two.sided") {
v1 <- rnorm(n, m1, sd1)
v2 <- rnorm(n, m2, sd2)
t_ind <- t.test(v1, v2, paired = FALSE, alternative = alt)
return(t_ind$p.value)
}
alpha <- 0.05
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(my_reps < alpha)
```
```
## [1] 0.752
```
### 8\.8\.7 Range of sample sizes
What if we want to find out what sample size will give us 80% power? We can try trial and error. We know the number should be slightly larger than 50\. But you can search more systematically by repeating your power calculation for a range of sample sizes.
This might seem like overkill for a t\-test, where you can easily look up sample size calculators online, but it is a valuable skill to learn for when your analyses become more complicated.
Start with a relatively low number of replications and/or more spread\-out samples to estimate where you should be looking more specifically. Then you can repeat with a narrower/denser range of sample sizes and more iterations.
```
# make another custom function to return power
pwr_func <- function(n, reps = 100, alpha = 0.05) {
ps <- replicate(reps, sim_t_ind(n, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(ps < alpha)
}
# make a table of the n values you want to check
power_table <- tibble(
n = seq(20, 100, by = 5)
) %>%
# run the power function for each n
mutate(power = map_dbl(n, pwr_func))
# plot the results
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
Now we can narrow down our search to values around 55 (plus or minus 5\) and increase the number of replications from 100 to 1e4\.
```
power_table <- tibble(
n = seq(50, 60)
) %>%
mutate(power = map_dbl(n, pwr_func, reps = 1e4))
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
8\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [alpha](https://psyteachr.github.io/glossary/a#alpha) | (stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot |
| [beta](https://psyteachr.github.io/glossary/b#beta) | The false negative rate we accept for a statistical test. |
| [binomial distribution](https://psyteachr.github.io/glossary/b#binomial.distribution) | The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. |
| [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate.normal) | Two normally distributed vectors that have a specified correlation with each other. |
| [confidence interval](https://psyteachr.github.io/glossary/c#confidence.interval) | A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic. |
| [correlation](https://psyteachr.github.io/glossary/c#correlation) | The relationship two vectors have to each other. |
| [covariance matrix](https://psyteachr.github.io/glossary/c#covariance.matrix) | Parameters showing how a set of vectors vary and are correlated. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [effect size](https://psyteachr.github.io/glossary/e#effect.size) | The difference between the effect in your data and the null effect (usually a chance value) |
| [effect](https://psyteachr.github.io/glossary/e#effect) | Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value. |
| [false negative](https://psyteachr.github.io/glossary/f#false.negative) | When a test concludes there is no effect when there really is an effect |
| [false positive](https://psyteachr.github.io/glossary/f#false.positive) | When a test concludes there is an effect when there really is no effect |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [nhst](https://psyteachr.github.io/glossary/n#nhst) | Null Hypothesis Signficance Testing |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [null effect](https://psyteachr.github.io/glossary/n#null.effect) | An outcome that does not show an otherwise expected effect. |
| [p value](https://psyteachr.github.io/glossary/p#p.value) | The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect) |
| [parameter](https://psyteachr.github.io/glossary/p#parameter) | A value that describes a distribution, such as the mean or SD |
| [poisson distribution](https://psyteachr.github.io/glossary/p#poisson.distribution) | A distribution that models independent events happening over a unit of time |
| [power](https://psyteachr.github.io/glossary/p#power) | The probability of rejecting the null hypothesis when it is false. |
| [probability](https://psyteachr.github.io/glossary/p#probability) | A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty |
| [sesoi](https://psyteachr.github.io/glossary/s#sesoi) | Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful |
| [significant](https://psyteachr.github.io/glossary/s#significant) | The conclusion when the p\-value is less than the critical alpha. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [true positive](https://psyteachr.github.io/glossary/t#true.positive) | When a test concludes there is an effect when there really is an effect |
| [type i error](https://psyteachr.github.io/glossary/t#type.i.error) | A false positive; When a test concludes there is an effect when there really is no effect |
| [type ii error](https://psyteachr.github.io/glossary/t#type.ii.error) | A false negative; When a test concludes there is no effect when there really is an effect |
| [uniform distribution](https://psyteachr.github.io/glossary/u#uniform.distribution) | A distribution where all numbers in the range have an equal probability of being sampled |
| [univariate](https://psyteachr.github.io/glossary/u#univariate) | Relating to a single variable. |
8\.10 Exercises
---------------
Download the [exercises](exercises/08_sim_exercise.Rmd). See the [answers](exercises/08_sim_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(8)
# run this to access the answers
dataskills::exercise(8, answers = TRUE)
```
8\.1 Learning Objectives
------------------------
### 8\.1\.1 Basic
1. Generate and plot data randomly sampled from common distributions [(video)](https://youtu.be/iuecrT3q1kg)
* [uniform](sim.html#uniform)
* [binomial](sim.html#binomial)
* [normal](sim.html#normal)
* [poisson](sim.html#poisson)
2. Generate related variables from a [multivariate](sim.html#mvdist) distribution [(video)](https://youtu.be/B14HfWQ1kIc)
3. Define the following [statistical terms](sim.html#stat-terms):
* [p\-value](sim.html#p-value)
* [alpha](sim.html#alpha)
* [power](sim.html#power)
* smallest effect size of interest ([SESOI](#sesoi))
* [false positive](sim.html#false-pos) (type I error)
* [false negative](#false-neg) (type II error)
* confidence interval ([CI](#conf-inf))
4. Test sampled distributions against a null hypothesis [(video)](https://youtu.be/Am3G6rA2S1s)
* [exact binomial test](sim.html#exact-binom)
* [t\-test](sim.html#t-test) (1\-sample, independent samples, paired samples)
* [correlation](sim.html#correlation) (pearson, kendall and spearman)
5. [Calculate power](sim.html#calc-power-binom) using iteration and a sampling function
### 8\.1\.2 Intermediate
6. Calculate the minimum sample size for a specific power level and design
### 8\.1\.1 Basic
1. Generate and plot data randomly sampled from common distributions [(video)](https://youtu.be/iuecrT3q1kg)
* [uniform](sim.html#uniform)
* [binomial](sim.html#binomial)
* [normal](sim.html#normal)
* [poisson](sim.html#poisson)
2. Generate related variables from a [multivariate](sim.html#mvdist) distribution [(video)](https://youtu.be/B14HfWQ1kIc)
3. Define the following [statistical terms](sim.html#stat-terms):
* [p\-value](sim.html#p-value)
* [alpha](sim.html#alpha)
* [power](sim.html#power)
* smallest effect size of interest ([SESOI](#sesoi))
* [false positive](sim.html#false-pos) (type I error)
* [false negative](#false-neg) (type II error)
* confidence interval ([CI](#conf-inf))
4. Test sampled distributions against a null hypothesis [(video)](https://youtu.be/Am3G6rA2S1s)
* [exact binomial test](sim.html#exact-binom)
* [t\-test](sim.html#t-test) (1\-sample, independent samples, paired samples)
* [correlation](sim.html#correlation) (pearson, kendall and spearman)
5. [Calculate power](sim.html#calc-power-binom) using iteration and a sampling function
### 8\.1\.2 Intermediate
6. Calculate the minimum sample size for a specific power level and design
8\.2 Resources
--------------
* [Stub for this lesson](stubs/8_sim.Rmd)
* [Distribution Shiny App](http://shiny.psy.gla.ac.uk/debruine/simulate/) (or run `dataskills::app("simulate")`
* [Simulation tutorials](https://debruine.github.io/tutorials/sim-data.html)
* [Chapter 21: Iteration](http://r4ds.had.co.nz/iteration.html) of *R for Data Science*
* [Improving your statistical inferences](https://www.coursera.org/learn/statistical-inferences/) on Coursera (week 1\)
* [Faux](https://debruine.github.io/faux/) package for data simulation
* [Simulation\-Based Power\-Analysis for Factorial ANOVA Designs](https://psyarxiv.com/baxsf) ([Daniel Lakens and Caldwell 2019](#ref-lakens_caldwell_2019))
* [Understanding mixed effects models through data simulation](https://psyarxiv.com/xp5cy/) ([DeBruine and Barr 2019](#ref-debruine_barr_2019))
8\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(plotly)
library(faux)
set.seed(8675309) # makes sure random numbers are reproducible
```
Simulating data is a very powerful way to test your understanding of statistical concepts. We are going to use [simulations](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") to learn the basics of [probability](https://psyteachr.github.io/glossary/p#probability "A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty").
8\.4 Univariate Distributions
-----------------------------
First, we need to understand some different ways data might be distributed and how to simulate data from these distributions. A [univariate](https://psyteachr.github.io/glossary/u#univariate "Relating to a single variable.") distribution is the distribution of a single variable.
### 8\.4\.1 Uniform Distribution
The [uniform distribution](https://psyteachr.github.io/glossary/u#uniform-distribution "A distribution where all numbers in the range have an equal probability of being sampled") is the simplest distribution. All numbers in the range have an equal probability of being sampled.
Take a minute to think of things in your own research that are uniformly distributed.
#### 8\.4\.1\.1 Continuous distribution
`runif(n, min=0, max=1)`
Use `runif()` to sample from a continuous uniform distribution.
```
u <- runif(100000, min = 0, max = 1)
# plot to visualise
ggplot() +
geom_histogram(aes(u), binwidth = 0.05, boundary = 0,
fill = "white", colour = "black")
```
#### 8\.4\.1\.2 Discrete
`sample(x, size, replace = FALSE, prob = NULL)`
Use `sample()` to sample from a [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") distribution.
You can use `sample()` to simulate events like rolling dice or choosing from a deck of cards. The code below simulates rolling a 6\-sided die 10000 times. We set `replace` to `TRUE` so that each event is independent. See what happens if you set `replace` to `FALSE`.
```
rolls <- sample(1:6, 10000, replace = TRUE)
# plot the results
ggplot() +
geom_histogram(aes(rolls), binwidth = 1,
fill = "white", color = "black")
```
Figure 8\.1: Distribution of dice rolls.
You can also use sample to sample from a list of named outcomes.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
sample(pet_types, 10, replace = TRUE)
```
```
## [1] "cat" "cat" "cat" "cat" "ferret" "dog" "bird" "cat"
## [9] "dog" "fish"
```
Ferrets are a much less common pet than cats and dogs, so our sample isn’t very realistic. You can set the probabilities of each item in the list with the `prob` argument.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
pet_prob <- c(0.3, 0.4, 0.1, 0.1, 0.1)
sample(pet_types, 10, replace = TRUE, prob = pet_prob)
```
```
## [1] "fish" "dog" "cat" "dog" "cat" "dog" "fish" "dog" "cat" "fish"
```
### 8\.4\.2 Binomial Distribution
The [binomial distribution](https://psyteachr.github.io/glossary/b#binomial-distribution "The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. ") is useful for modelling binary data, where each observation can have one of two outcomes, like success/failure, yes/no or head/tails.
`rbinom(n, size, prob)`
The `rbinom` function will generate a random binomial distribution.
* `n` \= number of observations
* `size` \= number of trials
* `prob` \= probability of success on each trial
Coin flips are a typical example of a binomial distribution, where we can assign heads to 1 and tails to 0\.
```
# 20 individual coin flips of a fair coin
rbinom(20, 1, 0.5)
```
```
## [1] 1 1 1 0 1 1 0 1 0 0 1 1 1 0 0 0 1 0 0 0
```
```
# 20 individual coin flips of a baised (0.75) coin
rbinom(20, 1, 0.75)
```
```
## [1] 1 1 1 0 1 0 1 1 1 0 1 1 1 0 0 1 1 1 1 1
```
You can generate the total number of heads in 1 set of 20 coin flips by setting `size` to 20 and `n` to 1\.
```
rbinom(1, 20, 0.75)
```
```
## [1] 13
```
You can generate more sets of 20 coin flips by increasing the `n`.
```
rbinom(10, 20, 0.5)
```
```
## [1] 10 14 11 7 11 13 6 10 9 9
```
You should always check your randomly generated data to check that it makes sense. For large samples, it’s easiest to do that graphically. A histogram is usually the best choice for plotting binomial data.
```
flips <- rbinom(1000, 20, 0.5)
ggplot() +
geom_histogram(
aes(flips),
binwidth = 1,
fill = "white",
color = "black"
)
```
Run the simulation above several times, noting how the histogram changes. Try changing the values of `n`, `size`, and `prob`.
### 8\.4\.3 Normal Distribution
`rnorm(n, mean, sd)`
We can simulate a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") of size `n` if we know the `mean` and standard deviation (`sd`). A density plot is usually the best way to visualise this type of data if your `n` is large.
```
dv <- rnorm(1e5, 10, 2)
# proportions of normally-distributed data
# within 1, 2, or 3 SD of the mean
sd1 <- .6827
sd2 <- .9545
sd3 <- .9973
ggplot() +
geom_density(aes(dv), fill = "white") +
geom_vline(xintercept = mean(dv), color = "red") +
geom_vline(xintercept = quantile(dv, .5 - sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 + sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 - sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 + sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 - sd3/2), color = "purple") +
geom_vline(xintercept = quantile(dv, .5 + sd3/2), color = "purple") +
scale_x_continuous(
limits = c(0,20),
breaks = seq(0,20)
)
```
Run the simulation above several times, noting how the density plot changes. What do the vertical lines represent? Try changing the values of `n`, `mean`, and `sd`.
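One way to check what the vertical lines represent is to compare those quantile\-based cutoffs to the mean plus or minus 1 or 2 SDs. This is just a quick sketch, assuming `dv`, `sd1`, and `sd2` from the chunk above.
```
# the quantile-based cutoffs should sit close to mean +/- k * SD
mean(dv) + sd(dv) * c(-1, 1)
quantile(dv, c(.5 - sd1/2, .5 + sd1/2))

mean(dv) + 2 * sd(dv) * c(-1, 1)
quantile(dv, c(.5 - sd2/2, .5 + sd2/2))
```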
### 8\.4\.4 Poisson Distribution
The [Poisson distribution](https://psyteachr.github.io/glossary/p#poisson-distribution "A distribution that models independent events happening over a unit of time") is useful for modelling events, like how many times something happens over a unit of time, as long as the events are independent (e.g., an event having happened in one time period doesn’t make it more or less likely to happen in the next).
`rpois(n, lambda)`
The `rpois` function will generate a random Poisson distribution.
* `n` \= number of observations
* `lambda` \= the mean number of events per observation
Let’s say we want to model how many texts you get each day for a whole year. You know that you get an average of 20 texts per day. So we set `n = 365` and `lambda = 20`. Lambda is a [parameter](https://psyteachr.github.io/glossary/p#parameter "A value that describes a distribution, such as the mean or SD") that describes the Poisson distribution, just like mean and standard deviation are parameters that describe the normal distribution.
```
texts <- rpois(n = 365, lambda = 20)
ggplot() +
geom_histogram(
aes(texts),
binwidth = 1,
fill = "white",
color = "black"
)
```
So we can see that over a year, you’re unlikely to get fewer than 5 texts in a day, or more than 35 (although it’s not impossible).
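You can check that claim against both the simulated days and the theoretical distribution. Here is a minimal sketch, assuming `texts` from the chunk above.
```
# proportion of simulated days with fewer than 5 or more than 35 texts
mean(texts < 5 | texts > 35)

# theoretical probability for a Poisson distribution with lambda = 20
ppois(4, lambda = 20) + ppois(35, lambda = 20, lower.tail = FALSE)
```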
8\.5 Multivariate Distributions
-------------------------------
### 8\.5\.1 Bivariate Normal
A [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate-normal "Two normally distributed vectors that have a specified correlation with each other.") distribution is two normally distributed vectors that have a specified relationship, or [correlation](https://psyteachr.github.io/glossary/c#correlation "The relationship two vectors have to each other.") to each other.
What if we want to sample from a population with specific relationships between variables? We can sample from a bivariate normal distribution using `mvrnorm()` from the `MASS` package.
Don’t load MASS with the `library()` function, because it will mask the `select()` function from dplyr and you would then always need to preface `select()` with `dplyr::`. Just use `MASS::mvrnorm()`.
You need to know how many observations you want to simulate (`n`) and the means of the two variables (`mu`), and you need to calculate a [covariance matrix](https://psyteachr.github.io/glossary/c#covariance-matrix "Parameters showing how a set of vectors vary and are correlated.") (`sigma`) from the correlation between the variables (`rho`) and their standard deviations (`sd`).
```
n <- 1000 # number of random samples
# name the mu values to give the resulting columns names
mu <- c(x = 10, y = 20) # the means of the samples
sd <- c(5, 6) # the SDs of the samples
rho <- 0.5 # population correlation between the two variables
# correlation matrix
cor_mat <- matrix(c( 1, rho,
rho, 1), 2)
# create the covariance matrix
sigma <- (sd %*% t(sd)) * cor_mat
# sample from bivariate normal distribution
bvn <- MASS::mvrnorm(n, mu, sigma)
```
Plot your sampled variables to check that everything worked as you expect. It’s easiest to convert the output of `mvrnorm()` into a tibble in order to use it in ggplot.
```
bvn %>%
as_tibble() %>%
ggplot(aes(x, y)) +
geom_point(alpha = 0.5) +
geom_smooth(method = "lm") +
geom_density2d()
```
```
## `geom_smooth()` using formula 'y ~ x'
```
### 8\.5\.2 Multivariate Normal
You can generate more than 2 correlated variables, but it gets a little trickier to create the correlation matrix.
```
n <- 200 # number of random samples
mu <- c(x = 10, y = 20, z = 30) # the means of the samples
sd <- c(8, 9, 10) # the SDs of the samples
rho1_2 <- 0.5 # correlation between x and y
rho1_3 <- 0 # correlation between x and z
rho2_3 <- 0.7 # correlation between y and z
# correlation matrix
cor_mat <- matrix(c( 1, rho1_2, rho1_3,
rho1_2, 1, rho2_3,
rho1_3, rho2_3, 1), 3)
sigma <- (sd %*% t(sd)) * cor_mat
bvn3 <- MASS::mvrnorm(n, mu, sigma)
cor(bvn3) # check correlation matrix
```
```
## x y z
## x 1.0000000 0.5896674 0.1513108
## y 0.5896674 1.0000000 0.7468737
## z 0.1513108 0.7468737 1.0000000
```
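Besides the correlations, it is worth confirming that the simulated means and SDs are close to the values you specified. A minimal check, assuming `bvn3` from the chunk above:
```
colMeans(bvn3)      # should be close to mu (10, 20, 30)
apply(bvn3, 2, sd)  # should be close to sd (8, 9, 10)
```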
You can use the `plotly` library to make a 3D graph.
```
#set up the marker style
marker_style = list(
color = "#ff0000",
line = list(
color = "#444",
width = 1
),
opacity = 0.5,
size = 5
)
# convert bvn3 to a tibble, plot and add markers
bvn3 %>%
as_tibble() %>%
plot_ly(x = ~x, y = ~y, z = ~z, marker = marker_style) %>%
add_markers()
```
### 8\.5\.3 Faux
Alternatively, you can use the package [faux](https://debruine.github.io/faux/) to generate any number of correlated variables. It also has a function for checking the parameters of your new simulated data (`check_sim_stats()`).
```
bvn3 <- rnorm_multi(
n = n,
vars = 3,
mu = mu,
sd = sd,
r = c(rho1_2, rho1_3, rho2_3),
varnames = c("x", "y", "z")
)
check_sim_stats(bvn3)
```
| n | var | x | y | z | mean | sd |
| --- | --- | --- | --- | --- | --- | --- |
| 200 | x | 1\.00 | 0\.54 | 0\.10 | 10\.35 | 7\.66 |
| 200 | y | 0\.54 | 1\.00 | 0\.67 | 20\.01 | 8\.77 |
| 200 | z | 0\.10 | 0\.67 | 1\.00 | 30\.37 | 9\.59 |
You can also use faux to simulate data for factorial designs. Set up the between\-subject and within\-subject factors as lists with the levels as (named) vectors. Means and standard deviations can be included as vectors or data frames. The function calculates sigma for you, structures your dataset, and outputs a plot of the design.
```
b <- list(pet = c(cat = "Cat Owners",
dog = "Dog Owners"))
w <- list(time = c("morning",
"noon",
"night"))
mu <- data.frame(
cat = c(10, 12, 14),
dog = c(10, 15, 20),
row.names = w$time
)
sd <- c(3, 3, 3, 5, 5, 5)
pet_data <- sim_design(
within = w,
between = b,
n = 100,
mu = mu,
sd = sd,
r = .5)
```
You can use the `check_sim_stats()` function, but you need to set the argument `between` to a vector of all the between\-subject factor columns.
```
check_sim_stats(pet_data, between = "pet")
```
| pet | n | var | morning | noon | night | mean | sd |
| --- | --- | --- | --- | --- | --- | --- | --- |
| cat | 100 | morning | 1\.00 | 0\.57 | 0\.51 | 10\.62 | 3\.48 |
| cat | 100 | noon | 0\.57 | 1\.00 | 0\.59 | 12\.44 | 3\.01 |
| cat | 100 | night | 0\.51 | 0\.59 | 1\.00 | 14\.61 | 3\.14 |
| dog | 100 | morning | 1\.00 | 0\.55 | 0\.50 | 9\.44 | 4\.92 |
| dog | 100 | noon | 0\.55 | 1\.00 | 0\.48 | 14\.18 | 5\.90 |
| dog | 100 | night | 0\.50 | 0\.48 | 1\.00 | 19\.42 | 5\.36 |
See the [faux website](https://debruine.github.io/faux/) for more detailed tutorials.
8\.6 Statistical terms
----------------------
Let’s review some important statistical terms before we move on to tests of distributions.
### 8\.6\.1 Effect
The [effect](https://psyteachr.github.io/glossary/e#effect "Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value.") is some measure of your data. This will depend on the type of data you have and the type of statistical test you are using. For example, if you flipped a coin 100 times and it landed heads 66 times, the effect would be 66/100\. You can then use the exact binomial test to compare this effect to the [null effect](https://psyteachr.github.io/glossary/n#null-effect "An outcome that does not show an otherwise expected effect.") you would expect from a fair coin (50/100\) or to any other effect you choose. The [effect size](https://psyteachr.github.io/glossary/e#effect-size "The difference between the effect in your data and the null effect (usually a chance value)") refers to the difference between the effect in your data and the null effect (usually a chance value).
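For example, here is how you could compare 66 heads in 100 flips to the null effect of 0\.5 using the exact binomial test (covered in more detail later in this chapter):
```
# exact binomial test of 66/100 heads against a fair coin
binom.test(66, 100, p = 0.5)
```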
### 8\.6\.2 P\-value
The [p\-value](https://psyteachr.github.io/glossary/p#p-value "The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect)") of a test is the probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect). So if you used a binomial test to test against a chance probability of 1/6 (e.g., the probability of rolling 1 with a 6\-sided die), then a p\-value of 0\.17 means that you could expect to see effects at least as extreme as your data 17% of the time just by chance alone.
### 8\.6\.3 Alpha
If you are using null hypothesis significance testing ([NHST](https://psyteachr.github.io/glossary/n#nhst "Null Hypothesis Signficance Testing")), then you need to decide on a cutoff value ([alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot")) for making a decision to reject the null hypothesis. We call p\-values below the alpha cutoff [significant](https://psyteachr.github.io/glossary/s#significant "The conclusion when the p-value is less than the critical alpha. "). In psychology, alpha is traditionally set at 0\.05, but there are good arguments for [setting a different criterion in some circumstances](http://daniellakens.blogspot.com/2019/05/justifying-your-alpha-by-minimizing-or.html).
### 8\.6\.4 False Positive/Negative
The probability that a test concludes there is an effect when there is really no effect (e.g., concludes a fair coin is biased) is called the [false positive](https://psyteachr.github.io/glossary/f#false-positive "When a test concludes there is an effect when there really is no effect") rate (or [Type I Error](https://psyteachr.github.io/glossary/t#type-i-error "A false positive; When a test concludes there is an effect when there is really is no effect") Rate). The [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") is the false positive rate we accept for a test. The probability that a test concludes there is no effect when there really is one (e.g., concludes a biased coin is fair) is called the [false negative](https://psyteachr.github.io/glossary/f#false-negative "When a test concludes there is no effect when there really is an effect") rate (or [Type II Error](https://psyteachr.github.io/glossary/t#type-ii-error "A false negative; When a test concludes there is no effect when there is really is an effect") Rate). The [beta](https://psyteachr.github.io/glossary/b#beta "The false negative rate we accept for a statistical test.") is the false negative rate we accept for a test.
The false positive rate is not the overall probability of getting a false positive, but the probability of a false positive *under the null hypothesis*. Similarly, the false negative rate is the probability of a false negative *under the alternative hypothesis*. Unless we know the probability that we are testing a null effect, we can’t say anything about the overall probability of false positives or negatives. If 100% of the hypotheses we test are false, then all significant effects are false positives, but if all of the hypotheses we test are true, then all of the positives are true positives and the overall false positive rate is 0\.
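As a minimal sketch of the first point, you can simulate many tests of a genuinely fair coin and count how often the test comes out significant:
```
# false positive rate under the null: test a fair coin many times
null_p <- replicate(1000, binom.test(rbinom(1, 100, 0.5), 100, p = 0.5)$p.value)
mean(null_p < .05) # should be at or below alpha = .05
```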
### 8\.6\.5 Power and SESOI
[Power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") is equal to 1 minus beta (i.e., the [true positive](https://psyteachr.github.io/glossary/t#true-positive "When a test concludes there is an effect when there is really is an effect") rate), and depends on the effect size, how many samples we take (n), and what we set alpha to. For any test, if you specify all but one of these values, you can calculate the last. The effect size you use in power calculations should be the smallest effect size of interest ([SESOI](https://psyteachr.github.io/glossary/s#sesoi "Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful")). See ([Daniël Lakens, Scheel, and Isager 2018](#ref-TOSTtutorial))([https://doi.org/10\.1177/2515245918770963](https://doi.org/10.1177/2515245918770963)) for a tutorial on methods for choosing an SESOI.
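For example, `power.t.test()` (used again later in this chapter) will solve for whichever of these values you leave out. Here it calculates the `n` per group needed for 80% power to detect a 0\.5 SD difference at alpha \= 0\.05; treat this as a sketch of the idea rather than a recommendation for any particular design.
```
# leave n out so power.t.test() solves for it
power.t.test(delta = 0.5, sd = 1,
             sig.level = 0.05, power = 0.8,
             type = "two.sample")
```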
Let’s say you want to be able to detect at least a 15% difference from chance (50%) in a coin’s fairness, and you want your test to have a 5% chance of false positives and a 10% chance of false negatives. What are the following values?
* alpha \=
* beta \=
* false positive rate \=
* false negative rate \=
* power \=
* SESOI \=
### 8\.6\.6 Confidence Intervals
The [confidence interval](https://psyteachr.github.io/glossary/c#confidence-interval "A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic.") is a range around some value (such as a mean) that has some probability of containing the parameter, if you repeated the process many times. Traditionally in psychology, we use 95% confidence intervals, but you can calculate CIs for any percentage.
A 95% CI does *not* mean that there is a 95% probability that the true mean lies within this range, but that, if you repeated the study many times and calculated the CI this same way every time, you’d expect the true mean to be inside the CI in 95% of the studies. This seems like a subtle distinction, but can lead to some misunderstandings. See ([Morey et al. 2016](#ref-Morey2016))([https://link.springer.com/article/10\.3758/s13423\-015\-0947\-8](https://link.springer.com/article/10.3758/s13423-015-0947-8)) for more detailed discussion.
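One way to convince yourself of this interpretation is to simulate many studies from a population with a known mean and check how often the 95% CI contains it. This is just a sketch:
```
# simulate 1000 studies of n = 30 from a population with mean 0
true_mean <- 0
covered <- replicate(1000, {
  ci <- t.test(rnorm(30, true_mean, 1))$conf.int
  ci[1] < true_mean & true_mean < ci[2]
})
mean(covered) # should be close to .95
```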
8\.7 Tests
----------
### 8\.7\.1 Exact binomial test
`binom.test(x, n, p)`
You can test a binomial distribution against a specific probability using the exact binomial test.
* `x` \= the number of successes
* `n` \= the number of trials
* `p` \= hypothesised probability of success
Here we can test a series of 10 coin flips from a fair coin and a biased coin against the hypothesised probability of 0\.5 (even odds).
```
n <- 10
fair_coin <- rbinom(1, n, 0.5)
biased_coin <- rbinom(1, n, 0.6)
binom.test(fair_coin, n, p = 0.5)
binom.test(biased_coin, n, p = 0.5)
```
```
##
## Exact binomial test
##
## data: fair_coin and n
## number of successes = 6, number of trials = 10, p-value = 0.7539
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.2623781 0.8784477
## sample estimates:
## probability of success
## 0.6
##
##
## Exact binomial test
##
## data: biased_coin and n
## number of successes = 8, number of trials = 10, p-value = 0.1094
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.4439045 0.9747893
## sample estimates:
## probability of success
## 0.8
```
Run the code above several times, noting the p\-values for the fair and biased coins. Alternatively, you can [simulate coin flips](http://shiny.psy.gla.ac.uk/debruine/coinsim/) online and build up a graph of results and p\-values.
* How does the p\-value vary for the fair and biased coins?
* What happens to the confidence intervals if you increase n from 10 to 100?
* What criterion would you use to tell if the observed data indicate the coin is fair or biased?
* How often do you conclude the fair coin is biased (false positives)?
* How often do you conclude the biased coin is fair (false negatives)?
#### 8\.7\.1\.1 Sampling function
To estimate these rates, we need to repeat the sampling above many times. A [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") is ideal for repeating the exact same procedure over and over. Set the arguments of the function to variables that you might want to change. Here, we will want to estimate power for:
* different sample sizes (`n`)
* different effects (`bias`)
* different hypothesised probabilities (`p`, defaults to 0\.5\)
```
sim_binom_test <- function(n, bias, p = 0.5) {
# simulate 1 coin flip n times with the specified bias
coin <- rbinom(1, n, bias)
# run a binomial test on the simulated data for the specified p
btest <- binom.test(coin, n, p)
# return the p-value of this test
btest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_binom_test(100, 0.6)
```
```
## [1] 0.1332106
```
#### 8\.7\.1\.2 Calculate power
Then you can use the `replicate()` function to run it many times and save all the output values. You can calculate the [power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") of your analysis by checking the proportion of your simulated analyses that have a p\-value less than your [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") (the probability of rejecting the null hypothesis when the null hypothesis is true).
```
my_reps <- replicate(1e4, sim_binom_test(100, 0.6))
alpha <- 0.05 # this does not always have to be 0.05
mean(my_reps < alpha)
```
```
## [1] 0.4561
```
`1e4` is just scientific notation for a 1 followed by 4 zeros (`10000`). When you’re running simulations, you usually want to run a lot of them. It’s a pain to keep track of whether you’ve typed 5 or 6 zeros (100000 vs 1000000\) and this will change your running time by an order of magnitude.
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
### 8\.7\.2 T\-test
`t.test(x, y, alternative, mu, paired)`
Use a t\-test to compare the mean of one distribution to a null hypothesis (one\-sample t\-test), compare the means of two samples (independent\-samples t\-test), or compare pairs of values (paired\-samples t\-test).
You can run a one\-sample t\-test comparing the mean of your data to `mu`. Here is a simulated distribution with a mean of 0\.5 and an SD of 1, creating an effect size of 0\.5 SD when tested against a `mu` of 0\. Run the simulation a few times to see how often the t\-test returns a significant p\-value (or run it in the [shiny app](http://shiny.psy.gla.ac.uk/debruine/normsim/)).
```
sim_norm <- rnorm(100, 0.5, 1)
t.test(sim_norm, mu = 0)
```
```
##
## One Sample t-test
##
## data: sim_norm
## t = 6.2874, df = 99, p-value = 8.758e-09
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.4049912 0.7784761
## sample estimates:
## mean of x
## 0.5917337
```
Run an independent\-samples t\-test by comparing two lists of values.
```
a <- rnorm(100, 0.5, 1)
b <- rnorm(100, 0.7, 1)
t_ind <- t.test(a, b, paired = FALSE)
t_ind
```
```
##
## Welch Two Sample t-test
##
## data: a and b
## t = -1.8061, df = 197.94, p-value = 0.07243
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.54825320 0.02408469
## sample estimates:
## mean of x mean of y
## 0.4585985 0.7206828
```
The `paired` argument defaults to `FALSE`, but it’s good practice to always explicitly set it so you are never confused about what type of test you are performing.
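The paired version isn’t demonstrated above, so here is a minimal sketch with simulated pre/post scores (the variable names and values are just for illustration):
```
# simulated pre/post scores for 100 people, improving by ~0.5 on average
pre <- rnorm(100, 10, 2)
post <- pre + rnorm(100, 0.5, 1)
t.test(pre, post, paired = TRUE)
```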
#### 8\.7\.2\.1 Sampling function
We can use the `names()` function to find out the names of all the components of the `t.test()` output and use this to extract just one piece of information, like the test statistic (e.g., the t\-value).
```
names(t_ind)
t_ind$statistic
```
```
## [1] "statistic" "parameter" "p.value" "conf.int" "estimate"
## [6] "null.value" "stderr" "alternative" "method" "data.name"
## t
## -1.806051
```
If you want to run the simulation many times and record information each time, first you need to turn your simulation into a function.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2) {
# simulate v1
v1 <- rnorm(n, m1, sd1)
#simulate v2
v2 <- rnorm(n, m2, sd2)
# compare using an independent samples t-test
t_ind <- t.test(v1, v2, paired = FALSE)
# return the p-value
return(t_ind$p.value)
}
```
Run it a few times to check that it gives you sensible values.
```
sim_t_ind(100, 0.7, 1, 0.5, 1)
```
```
## [1] 0.362521
```
#### 8\.7\.2\.2 Calculate power
Now replicate the simulation 10,000 times.
```
my_reps <- replicate(1e4, sim_t_ind(100, 0.7, 1, 0.5, 1))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.2925
```
Run the code above several times. How much does the power value fluctuate? How many replications do you need to run to get a reliable estimate of power?
Compare your power estimate from simulation to a power calculation using `power.t.test()`. Here, `delta` is the difference between `m1` and `m2` above.
```
power.t.test(n = 100,
delta = 0.2,
sd = 1,
sig.level = alpha,
type = "two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
What do you think the distribution of p\-values is when there is no effect (i.e., the means are identical)? Check this yourself.
Make sure the `boundary` argument is set to `0` for p\-value histograms. See what happens with a null effect if `boundary` is not set.
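If you want to check this, one way is to rerun the simulation with identical means and plot the p\-values, which should be roughly uniform between 0 and 1:
```
# null effect: both groups have the same mean
null_reps <- replicate(1e4, sim_t_ind(100, 0.5, 1, 0.5, 1))
mean(null_reps < .05) # should be close to alpha
ggplot() +
  geom_histogram(
    aes(null_reps),
    binwidth = 0.05,
    boundary = 0,
    fill = "white",
    color = "black"
  )
```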
### 8\.7\.3 Correlation
You can measure how strongly two continuous variables are related to each other using the `cor()` function. Let’s use `rnorm_multi()` to make a quick table of correlated values.
```
dat <- rnorm_multi(
n = 100,
vars = 2,
r = -0.5,
varnames = c("x", "y")
)
cor(dat$x, dat$y)
```
```
## [1] -0.4960331
```
Set `n` to a large number like 1e6 so that the correlations are less affected by chance. Change the value of the **mean** for `x` or `y`. Does it change the correlation between `x` and `y`? What happens when you increase or decrease the **sd**? Can you work out any rules here?
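One rule you can check quickly, assuming `dat` from the chunk above: shifting or rescaling a variable doesn’t change its Pearson correlation with another.
```
cor(dat$x, dat$y)
cor(dat$x * 10 + 5, dat$y) # same correlation after rescaling and shifting x
```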
`cor()` defaults to Pearson’s correlations. Set the `method` argument to use Kendall or Spearman correlations.
```
cor(dat$x, dat$y, method = "spearman")
```
```
## [1] -0.4724992
```
#### 8\.7\.3\.1 Sampling function
Create a function that simulates two variables with `n` observations and a correlation of `r`. Use the function `cor.test()` to give you p\-values for the correlation.
```
sim_cor_test <- function(n = 100, r = 0) {
dat <- rnorm_multi(
n = n,
vars = 2,
r = r,
varnames = c("x", "y")
)
ctest <- cor.test(dat$x, dat$y)
ctest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_cor_test(50, .5)
```
```
## [1] 0.001354836
```
#### 8\.7\.3\.2 Calculate power
Now replicate the simulation 10,000 times.
```
my_reps <- replicate(1e4, sim_cor_test(50, 0.5))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.965
```
Compare this to the value calculated by the pwr package.
```
pwr::pwr.r.test(n = 50, r = 0.5)
```
```
##
## approximate correlation power calculation (arctangh transformation)
##
## n = 50
## r = 0.5
## sig.level = 0.05
## power = 0.9669813
## alternative = two.sided
```
### 8\.7\.1 Exact binomial test
`binom.test(x, n, p)`
You can test a binomial distribution against a specific probability using the exact binomial test.
* `x` \= the number of successes
* `n` \= the number of trials
* `p` \= hypothesised probability of success
Here we can test a series of 10 coin flips from a fair coin and a biased coin against the hypothesised probability of 0\.5 (even odds).
```
n <- 10
fair_coin <- rbinom(1, n, 0.5)
biased_coin <- rbinom(1, n, 0.6)
binom.test(fair_coin, n, p = 0.5)
binom.test(biased_coin, n, p = 0.5)
```
```
##
## Exact binomial test
##
## data: fair_coin and n
## number of successes = 6, number of trials = 10, p-value = 0.7539
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.2623781 0.8784477
## sample estimates:
## probability of success
## 0.6
##
##
## Exact binomial test
##
## data: biased_coin and n
## number of successes = 8, number of trials = 10, p-value = 0.1094
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.4439045 0.9747893
## sample estimates:
## probability of success
## 0.8
```
Run the code above several times, noting the p\-values for the fair and biased coins. Alternatively, you can [simulate coin flips](http://shiny.psy.gla.ac.uk/debruine/coinsim/) online and build up a graph of results and p\-values.
* How does the p\-value vary for the fair and biased coins?
* What happens to the confidence intervals if you increase n from 10 to 100?
* What criterion would you use to tell if the observed data indicate the coin is fair or biased?
* How often do you conclude the fair coin is biased (false positives)?
* How often do you conclude the biased coin is fair (false negatives)?
#### 8\.7\.1\.1 Sampling function
To estimate these rates, we need to repeat the sampling above many times. A [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") is ideal for repeating the exact same procedure over and over. Set the arguments of the function to variables that you might want to change. Here, we will want to estimate power for:
* different sample sizes (`n`)
* different effects (`bias`)
* different hypothesised probabilities (`p`, defaults to 0\.5\)
```
sim_binom_test <- function(n, bias, p = 0.5) {
# simulate 1 coin flip n times with the specified bias
coin <- rbinom(1, n, bias)
# run a binomial test on the simulated data for the specified p
btest <- binom.test(coin, n, p)
# return the p-value of this test
btest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_binom_test(100, 0.6)
```
```
## [1] 0.1332106
```
#### 8\.7\.1\.2 Calculate power
Then you can use the `replicate()` function to run it many times and save all the output values. You can calculate the [power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") of your analysis by checking the proportion of your simulated analyses that have a p\-value less than your [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") (the probability of rejecting the null hypothesis when the null hypothesis is true).
```
my_reps <- replicate(1e4, sim_binom_test(100, 0.6))
alpha <- 0.05 # this does not always have to be 0.05
mean(my_reps < alpha)
```
```
## [1] 0.4561
```
`1e4` is just scientific notation for a 1 followed by 4 zeros (`10000`). When you’re running simulations, you usually want to run a lot of them. It’s a pain to keep track of whether you’ve typed 5 or 6 zeros (100000 vs 1000000\) and this will change your running time by an order of magnitude.
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
#### 8\.7\.1\.1 Sampling function
To estimate these rates, we need to repeat the sampling above many times. A [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") is ideal for repeating the exact same procedure over and over. Set the arguments of the function to variables that you might want to change. Here, we will want to estimate power for:
* different sample sizes (`n`)
* different effects (`bias`)
* different hypothesised probabilities (`p`, defaults to 0\.5\)
```
sim_binom_test <- function(n, bias, p = 0.5) {
# simulate 1 coin flip n times with the specified bias
coin <- rbinom(1, n, bias)
# run a binomial test on the simulated data for the specified p
btest <- binom.test(coin, n, p)
# return the p-value of this test
btest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_binom_test(100, 0.6)
```
```
## [1] 0.1332106
```
#### 8\.7\.1\.2 Calculate power
Then you can use the `replicate()` function to run it many times and save all the output values. You can calculate the [power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") of your analysis by checking the proportion of your simulated analyses that have a p\-value less than your [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") (the probability of rejecting the null hypothesis when the null hypothesis is true).
```
my_reps <- replicate(1e4, sim_binom_test(100, 0.6))
alpha <- 0.05 # this does not always have to be 0.05
mean(my_reps < alpha)
```
```
## [1] 0.4561
```
`1e4` is just scientific notation for a 1 followed by 4 zeros (`10000`). When you’re running simulations, you usually want to run a lot of them. It’s a pain to keep track of whether you’ve typed 5 or 6 zeros (100000 vs 1000000\) and this will change your running time by an order of magnitude.
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
### 8\.7\.2 T\-test
`t.test(x, y, alternative, mu, paired)`
Use a t\-test to compare the mean of one distribution to a null hypothesis (one\-sample t\-test), compare the means of two samples (independent\-samples t\-test), or compare pairs of values (paired\-samples t\-test).
You can run a one\-sample t\-test comparing the mean of your data to `mu`. Here is a simulated distribution with a mean of 0\.5 and an SD of 1, creating an effect size of 0\.5 SD when tested against a `mu` of 0\. Run the simulation a few times to see how often the t\-test returns a significant p\-value (or run it in the [shiny app](http://shiny.psy.gla.ac.uk/debruine/normsim/)).
```
sim_norm <- rnorm(100, 0.5, 1)
t.test(sim_norm, mu = 0)
```
```
##
## One Sample t-test
##
## data: sim_norm
## t = 6.2874, df = 99, p-value = 8.758e-09
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.4049912 0.7784761
## sample estimates:
## mean of x
## 0.5917337
```
Run an independent\-samples t\-test by comparing two lists of values.
```
a <- rnorm(100, 0.5, 1)
b <- rnorm(100, 0.7, 1)
t_ind <- t.test(a, b, paired = FALSE)
t_ind
```
```
##
## Welch Two Sample t-test
##
## data: a and b
## t = -1.8061, df = 197.94, p-value = 0.07243
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.54825320 0.02408469
## sample estimates:
## mean of x mean of y
## 0.4585985 0.7206828
```
The `paired` argument defaults to `FALSE`, but it’s good practice to always explicitly set it so you are never confused about what type of test you are performing.
#### 8\.7\.2\.1 Sampling function
We can use the `names()` function to find out the names of all the t.test parameters and use this to just get one type of data, like the test statistic (e.g., t\-value).
```
names(t_ind)
t_ind$statistic
```
```
## [1] "statistic" "parameter" "p.value" "conf.int" "estimate"
## [6] "null.value" "stderr" "alternative" "method" "data.name"
## t
## -1.806051
```
If you want to run the simulation many times and record information each time, first you need to turn your simulation into a function.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2) {
# simulate v1
v1 <- rnorm(n, m1, sd1)
#simulate v2
v2 <- rnorm(n, m2, sd2)
# compare using an independent samples t-test
t_ind <- t.test(v1, v2, paired = FALSE)
# return the p-value
return(t_ind$p.value)
}
```
Run it a few times to check that it gives you sensible values.
```
sim_t_ind(100, 0.7, 1, 0.5, 1)
```
```
## [1] 0.362521
```
#### 8\.7\.2\.2 Calculate power
Now replicate the simulation 1000 times.
```
my_reps <- replicate(1e4, sim_t_ind(100, 0.7, 1, 0.5, 1))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.2925
```
Run the code above several times. How much does the power value fluctuate? How many replications do you need to run to get a reliable estimate of power?
Compare your power estimate from simluation to a power calculation using `power.t.test()`. Here, `delta` is the difference between `m1` and `m2` above.
```
power.t.test(n = 100,
delta = 0.2,
sd = 1,
sig.level = alpha,
type = "two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
What do you think the distribution of p\-values is when there is no effect (i.e., the means are identical)? Check this yourself.
Make sure the `boundary` argument is set to `0` for p\-value histograms. See what happens with a null effect if `boundary` is not set.
#### 8\.7\.2\.1 Sampling function
We can use the `names()` function to find out the names of all the t.test parameters and use this to just get one type of data, like the test statistic (e.g., t\-value).
```
names(t_ind)
t_ind$statistic
```
```
## [1] "statistic" "parameter" "p.value" "conf.int" "estimate"
## [6] "null.value" "stderr" "alternative" "method" "data.name"
## t
## -1.806051
```
If you want to run the simulation many times and record information each time, first you need to turn your simulation into a function.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2) {
# simulate v1
v1 <- rnorm(n, m1, sd1)
#simulate v2
v2 <- rnorm(n, m2, sd2)
# compare using an independent samples t-test
t_ind <- t.test(v1, v2, paired = FALSE)
# return the p-value
return(t_ind$p.value)
}
```
Run it a few times to check that it gives you sensible values.
```
sim_t_ind(100, 0.7, 1, 0.5, 1)
```
```
## [1] 0.362521
```
#### 8\.7\.2\.2 Calculate power
Now replicate the simulation 1000 times.
```
my_reps <- replicate(1e4, sim_t_ind(100, 0.7, 1, 0.5, 1))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.2925
```
Run the code above several times. How much does the power value fluctuate? How many replications do you need to run to get a reliable estimate of power?
Compare your power estimate from simluation to a power calculation using `power.t.test()`. Here, `delta` is the difference between `m1` and `m2` above.
```
power.t.test(n = 100,
delta = 0.2,
sd = 1,
sig.level = alpha,
type = "two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
What do you think the distribution of p\-values is when there is no effect (i.e., the means are identical)? Check this yourself.
Make sure the `boundary` argument is set to `0` for p\-value histograms. See what happens with a null effect if `boundary` is not set.
### 8\.7\.3 Correlation
You can test if continuous variables are related to each other using the `cor()` function. Let’s use `rnorm_multi()` to make a quick table of correlated values.
```
dat <- rnorm_multi(
n = 100,
vars = 2,
r = -0.5,
varnames = c("x", "y")
)
cor(dat$x, dat$y)
```
```
## [1] -0.4960331
```
Set `n` to a large number like 1e6 so that the correlations are less affected by chance. Change the value of the **mean** for `a`, `x`, or `y`. Does it change the correlation between `x` and `y`? What happens when you increase or decrease the **sd**? Can you work out any rules here?
`cor()` defaults to Pearson’s correlations. Set the `method` argument to use Kendall or Spearman correlations.
```
cor(dat$x, dat$y, method = "spearman")
```
```
## [1] -0.4724992
```
#### 8\.7\.3\.1 Sampling function
Create a function that creates two variables with `n` observations and `r` correlation. Use the function `cor.test()` to give you p\-values for the correlation.
```
sim_cor_test <- function(n = 100, r = 0) {
dat <- rnorm_multi(
n = n,
vars = 2,
r = r,
varnames = c("x", "y")
)
ctest <- cor.test(dat$x, dat$y)
ctest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_cor_test(50, .5)
```
```
## [1] 0.001354836
```
#### 8\.7\.3\.2 Calculate power
Now replicate the simulation 1e4 times.
```
my_reps <- replicate(1e4, sim_cor_test(50, 0.5))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.965
```
Compare to the value calculated by the pwr package.
```
pwr::pwr.r.test(n = 50, r = 0.5)
```
```
##
## approximate correlation power calculation (arctangh transformation)
##
## n = 50
## r = 0.5
## sig.level = 0.05
## power = 0.9669813
## alternative = two.sided
```
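The pwr functions can also solve for whichever parameter you leave out. As a sketch, leaving `n` unspecified and supplying `power` asks how many observations you would need for 80% power at r \= 0\.5; the answer should come out well below the n \= 50 used above.
```
# leave n out and supply power to solve for the sample size
pwr::pwr.r.test(r = 0.5, power = 0.8, sig.level = 0.05)
```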
8\.8 Example
------------
This example uses the [Growth Chart Data Tables](https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv) from the [US CDC](https://www.cdc.gov/growthcharts/zscore.htm). The data consist of height in centimeters for the z\-scores of –2, \-1\.5, \-1, \-0\.5, 0, 0\.5, 1, 1\.5, and 2 by sex (1\=male; 2\=female) and half\-month of age (from 24\.0 to 240\.5 months).
### 8\.8\.1 Load \& wrangle
We have to do a little data wrangling first. Have a look at the data after you import it and relabel `Sex` to `male` and `female` instead of `1` and `2`. Also convert `Agemos` (age in months) to years. Relabel the column `0` as `mean` and calculate a new column named `sd` as the difference between columns `1` and `0`.
```
orig_height_age <- read_csv("https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## Sex = col_character(),
## Agemos = col_character(),
## `-2` = col_double(),
## `-1.5` = col_double(),
## `-1` = col_double(),
## `-0.5` = col_double(),
## `0` = col_double(),
## `0.5` = col_double(),
## `1` = col_double(),
## `1.5` = col_double(),
## `2` = col_double()
## )
```
```
height_age <- orig_height_age %>%
filter(Sex %in% c(1,2)) %>%
mutate(
sex = recode(Sex, "1" = "male", "2" = "female"),
age = as.numeric(Agemos)/12,
sd = `1` - `0`
) %>%
select(sex, age, mean = `0`, sd)
```
### 8\.8\.2 Plot
Plot your new data frame to see how mean height changes with age for boys and girls.
```
ggplot(height_age, aes(age, mean, color = sex)) +
geom_smooth(aes(ymin = mean - sd,
ymax = mean + sd),
stat="identity")
```
### 8\.8\.3 Simulate a population
Simulate 50 random male heights and 50 random female heights for 20\-year\-olds using the `rnorm()` function and the means and SDs from the `height_age` table. Plot the data.
```
age_filter <- 20
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_height <- tibble(
male = rnorm(50, m$mean, m$sd),
female = rnorm(50, f$mean, f$sd)
) %>%
gather("sex", "height", male:female)
ggplot(sim_height) +
geom_density(aes(height, fill = sex), alpha = 0.5) +
xlim(125, 225)
```
Run the simulation above several times, noting how the density plot changes. Try changing the age you’re simulating.
### 8\.8\.4 Analyse simulated data
Use the `sim_t_ind(n, m1, sd1, m2, sd2)` function we created above to generate one simulation with a sample size of 50 in each group using the means and SDs of male and female 14\-year\-olds.
```
age_filter <- 14
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_t_ind(50, m$mean, m$sd, f$mean, f$sd)
```
```
## [1] 0.0005255744
```
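It can also help to know how big this effect is in standardised units. Here is a sketch of a standardised effect size (similar to Cohen’s d) for the 14\-year\-old parameters, dividing the mean difference by a pooled SD; this assumes the two SDs are similar enough to average, and `d` is just an arbitrary object name.
```
# difference in means divided by the average of the two SDs (pooled SD)
d <- (m$mean - f$mean) / sqrt((m$sd^2 + f$sd^2) / 2)
d
```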
### 8\.8\.5 Replicate simulation
Now replicate this 1e4 times using the `replicate()` function. This function will save the returned p\-values in a vector (`my_reps`). We can then check what proportion of those p\-values are less than our alpha value. This is the power of our test.
```
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.6403
```
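As with the earlier t\-test example, you can sanity\-check this simulated power against `power.t.test()`. This is only a sketch, because `power.t.test()` takes a single `sd`, so it assumes the male and female SDs are similar enough to pool.
```
power.t.test(n = 50,
             delta = m$mean - f$mean,
             sd = sqrt((m$sd^2 + f$sd^2) / 2),
             sig.level = alpha,
             type = "two.sample")
```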
### 8\.8\.6 One\-tailed prediction
This design has about 65% power to detect the sex difference in height (with a 2\-tailed test). Modify the `sim_t_ind` function for a 1\-tailed prediction.
You could just set `alternative` equal to “greater” in the function, but it might be better to add the `alt` argument to your function (giving it the same default value as `t.test`) and change the value of `alternative` in the function to `alt`.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2, alt = "two.sided") {
v1 <- rnorm(n, m1, sd1)
v2 <- rnorm(n, m2, sd2)
t_ind <- t.test(v1, v2, paired = FALSE, alternative = alt)
return(t_ind$p.value)
}
alpha <- 0.05
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(my_reps < alpha)
```
```
## [1] 0.752
```
### 8\.8\.7 Range of sample sizes
What if we want to find out what sample size will give us 80% power? We can try trial and error. We know the number should be slightly larger than 50\. But you can search more systematically by repeating your power calculation for a range of sample sizes.
This might seem like overkill for a t\-test, where you can easily look up sample size calculators online, but it is a valuable skill to learn for when your analyses become more complicated.
Start with a relatively low number of replications and/or more spread\-out samples to estimate where you should be looking more specifically. Then you can repeat with a narrower/denser range of sample sizes and more iterations.
```
# make another custom function to return power
pwr_func <- function(n, reps = 100, alpha = 0.05) {
ps <- replicate(reps, sim_t_ind(n, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(ps < alpha)
}
# make a table of the n values you want to check
power_table <- tibble(
n = seq(20, 100, by = 5)
) %>%
# run the power function for each n
mutate(power = map_dbl(n, pwr_func))
# plot the results
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
Now we can narrow down our search to values around 55 (plus or minus 5\) and increase the number of replications from 100 to 1e4\.
```
power_table <- tibble(
n = seq(50, 60)
) %>%
mutate(power = map_dbl(n, pwr_func, reps = 1e4))
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
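If you would rather pull a number out of the table than read it off the plot, one option is to take the smallest `n` whose estimated power reaches 0\.8\. Treat this as a sketch: the exact cutoff will wobble a little with simulation error.
```
power_table %>%
  filter(power >= 0.8) %>%
  summarise(min_n = min(n))
```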
8\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [alpha](https://psyteachr.github.io/glossary/a#alpha) | (stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot |
| [beta](https://psyteachr.github.io/glossary/b#beta) | The false negative rate we accept for a statistical test. |
| [binomial distribution](https://psyteachr.github.io/glossary/b#binomial.distribution) | The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. |
| [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate.normal) | Two normally distributed vectors that have a specified correlation with each other. |
| [confidence interval](https://psyteachr.github.io/glossary/c#confidence.interval) | A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic. |
| [correlation](https://psyteachr.github.io/glossary/c#correlation) | The relationship two vectors have to each other. |
| [covariance matrix](https://psyteachr.github.io/glossary/c#covariance.matrix) | Parameters showing how a set of vectors vary and are correlated. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [effect size](https://psyteachr.github.io/glossary/e#effect.size) | The difference between the effect in your data and the null effect (usually a chance value) |
| [effect](https://psyteachr.github.io/glossary/e#effect) | Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value. |
| [false negative](https://psyteachr.github.io/glossary/f#false.negative) | When a test concludes there is no effect when there really is an effect |
| [false positive](https://psyteachr.github.io/glossary/f#false.positive) | When a test concludes there is an effect when there really is no effect |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [nhst](https://psyteachr.github.io/glossary/n#nhst) | Null Hypothesis Significance Testing |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [null effect](https://psyteachr.github.io/glossary/n#null.effect) | An outcome that does not show an otherwise expected effect. |
| [p value](https://psyteachr.github.io/glossary/p#p.value) | The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect) |
| [parameter](https://psyteachr.github.io/glossary/p#parameter) | A value that describes a distribution, such as the mean or SD |
| [poisson distribution](https://psyteachr.github.io/glossary/p#poisson.distribution) | A distribution that models independent events happening over a unit of time |
| [power](https://psyteachr.github.io/glossary/p#power) | The probability of rejecting the null hypothesis when it is false. |
| [probability](https://psyteachr.github.io/glossary/p#probability) | A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty |
| [sesoi](https://psyteachr.github.io/glossary/s#sesoi) | Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful |
| [significant](https://psyteachr.github.io/glossary/s#significant) | The conclusion when the p\-value is less than the critical alpha. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [true positive](https://psyteachr.github.io/glossary/t#true.positive) | When a test concludes there is an effect when there really is an effect |
| [type i error](https://psyteachr.github.io/glossary/t#type.i.error) | A false positive; when a test concludes there is an effect when there really is no effect |
| [type ii error](https://psyteachr.github.io/glossary/t#type.ii.error) | A false negative; when a test concludes there is no effect when there really is an effect |
| [uniform distribution](https://psyteachr.github.io/glossary/u#uniform.distribution) | A distribution where all numbers in the range have an equal probability of being sampled |
| [univariate](https://psyteachr.github.io/glossary/u#univariate) | Relating to a single variable. |
8\.10 Exercises
---------------
Download the [exercises](exercises/08_sim_exercise.Rmd). See the [answers](exercises/08_sim_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(8)
# run this to access the answers
dataskills::exercise(8, answers = TRUE)
```
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/sim.html |
Chapter 8 Probability \& Simulation
===================================
8\.1 Learning Objectives
------------------------
### 8\.1\.1 Basic
1. Generate and plot data randomly sampled from common distributions [(video)](https://youtu.be/iuecrT3q1kg)
* [uniform](sim.html#uniform)
* [binomial](sim.html#binomial)
* [normal](sim.html#normal)
* [poisson](sim.html#poisson)
2. Generate related variables from a [multivariate](sim.html#mvdist) distribution [(video)](https://youtu.be/B14HfWQ1kIc)
3. Define the following [statistical terms](sim.html#stat-terms):
* [p\-value](sim.html#p-value)
* [alpha](sim.html#alpha)
* [power](sim.html#power)
* smallest effect size of interest ([SESOI](#sesoi))
* [false positive](sim.html#false-pos) (type I error)
* [false negative](#false-neg) (type II error)
* confidence interval ([CI](#conf-inf))
4. Test sampled distributions against a null hypothesis [(video)](https://youtu.be/Am3G6rA2S1s)
* [exact binomial test](sim.html#exact-binom)
* [t\-test](sim.html#t-test) (1\-sample, independent samples, paired samples)
* [correlation](sim.html#correlation) (pearson, kendall and spearman)
5. [Calculate power](sim.html#calc-power-binom) using iteration and a sampling function
### 8\.1\.2 Intermediate
6. Calculate the minimum sample size for a specific power level and design
8\.2 Resources
--------------
* [Stub for this lesson](stubs/8_sim.Rmd)
* [Distribution Shiny App](http://shiny.psy.gla.ac.uk/debruine/simulate/) (or run `dataskills::app("simulate")`)
* [Simulation tutorials](https://debruine.github.io/tutorials/sim-data.html)
* [Chapter 21: Iteration](http://r4ds.had.co.nz/iteration.html) of *R for Data Science*
* [Improving your statistical inferences](https://www.coursera.org/learn/statistical-inferences/) on Coursera (week 1\)
* [Faux](https://debruine.github.io/faux/) package for data simulation
* [Simulation\-Based Power\-Analysis for Factorial ANOVA Designs](https://psyarxiv.com/baxsf) ([Daniel Lakens and Caldwell 2019](#ref-lakens_caldwell_2019))
* [Understanding mixed effects models through data simulation](https://psyarxiv.com/xp5cy/) ([DeBruine and Barr 2019](#ref-debruine_barr_2019))
8\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(plotly)
library(faux)
set.seed(8675309) # makes sure random numbers are reproducible
```
Simulating data is a very powerful way to test your understanding of statistical concepts. We are going to use [simulations](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") to learn the basics of [probability](https://psyteachr.github.io/glossary/p#probability "A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty").
8\.4 Univariate Distributions
-----------------------------
First, we need to understand some different ways data might be distributed and how to simulate data from these distributions. A [univariate](https://psyteachr.github.io/glossary/u#univariate "Relating to a single variable.") distribution is the distribution of a single variable.
### 8\.4\.1 Uniform Distribution
The [uniform distribution](https://psyteachr.github.io/glossary/u#uniform-distribution "A distribution where all numbers in the range have an equal probability of being sampled") is the simplest distribution. All numbers in the range have an equal probability of being sampled.
Take a minute to think of things in your own research that are uniformly distributed.
#### 8\.4\.1\.1 Continuous distribution
`runif(n, min=0, max=1)`
Use `runif()` to sample from a continuous uniform distribution.
```
u <- runif(100000, min = 0, max = 1)
# plot to visualise
ggplot() +
geom_histogram(aes(u), binwidth = 0.05, boundary = 0,
fill = "white", colour = "black")
```
#### 8\.4\.1\.2 Discrete
`sample(x, size, replace = FALSE, prob = NULL)`
Use `sample()` to sample from a [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") distribution.
You can use `sample()` to simulate events like rolling dice or choosing from a deck of cards. The code below simulates rolling a 6\-sided die 10000 times. We set `replace` to `TRUE` so that each event is independent. See what happens if you set `replace` to `FALSE`.
```
rolls <- sample(1:6, 10000, replace = TRUE)
# plot the results
ggplot() +
geom_histogram(aes(rolls), binwidth = 1,
fill = "white", color = "black")
```
Figure 8\.1: Distribution of dice rolls.
You can also use sample to sample from a list of named outcomes.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
sample(pet_types, 10, replace = TRUE)
```
```
## [1] "cat" "cat" "cat" "cat" "ferret" "dog" "bird" "cat"
## [9] "dog" "fish"
```
Ferrets are a much less common pet than cats and dogs, so our sample isn’t very realistic. You can set the probabilities of each item in the list with the `prob` argument.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
pet_prob <- c(0.3, 0.4, 0.1, 0.1, 0.1)
sample(pet_types, 10, replace = TRUE, prob = pet_prob)
```
```
## [1] "fish" "dog" "cat" "dog" "cat" "dog" "fish" "dog" "cat" "fish"
```
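With only 10 draws it is hard to tell whether the `prob` weights are doing anything. A quick sketch to check: draw a much larger sample and compare the observed proportions to `pet_prob` (the object name `big_sample` is arbitrary).
```
big_sample <- sample(pet_types, 1e5, replace = TRUE, prob = pet_prob)

# observed proportions should be close to 0.3, 0.4, 0.1, 0.1, 0.1
prop.table(table(big_sample))
```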
### 8\.4\.2 Binomial Distribution
The [binomial distribution](https://psyteachr.github.io/glossary/b#binomial-distribution "The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. ") is useful for modelling binary data, where each observation can have one of two outcomes, like success/failure, yes/no or head/tails.
`rbinom(n, size, prob)`
The `rbinom` function will generate a random binomial distribution.
* `n` \= number of observations
* `size` \= number of trials
* `prob` \= probability of success on each trial
Coin flips are a typical example of a binomial distribution, where we can assign heads to 1 and tails to 0\.
```
# 20 individual coin flips of a fair coin
rbinom(20, 1, 0.5)
```
```
## [1] 1 1 1 0 1 1 0 1 0 0 1 1 1 0 0 0 1 0 0 0
```
```
# 20 individual coin flips of a biased (0.75) coin
rbinom(20, 1, 0.75)
```
```
## [1] 1 1 1 0 1 0 1 1 1 0 1 1 1 0 0 1 1 1 1 1
```
You can generate the total number of heads in 1 set of 20 coin flips by setting `size` to 20 and `n` to 1\.
```
rbinom(1, 20, 0.75)
```
```
## [1] 13
```
You can generate more sets of 20 coin flips by increasing the `n`.
```
rbinom(10, 20, 0.5)
```
```
## [1] 10 14 11 7 11 13 6 10 9 9
```
You should always inspect your randomly generated data to check that it makes sense. For large samples, it’s easiest to do that graphically. A histogram is usually the best choice for plotting binomial data.
```
flips <- rbinom(1000, 20, 0.5)
ggplot() +
geom_histogram(
aes(flips),
binwidth = 1,
fill = "white",
color = "black"
)
```
Run the simulation above several times, noting how the histogram changes. Try changing the values of `n`, `size`, and `prob`.
### 8\.4\.3 Normal Distribution
`rnorm(n, mean, sd)`
We can simulate a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") of size `n` if we know the `mean` and standard deviation (`sd`). A density plot is usually the best way to visualise this type of data if your `n` is large.
```
dv <- rnorm(1e5, 10, 2)
# proportions of normally-distributed data
# within 1, 2, or 3 SD of the mean
sd1 <- .6827
sd2 <- .9545
sd3 <- .9973
ggplot() +
geom_density(aes(dv), fill = "white") +
geom_vline(xintercept = mean(dv), color = "red") +
geom_vline(xintercept = quantile(dv, .5 - sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 + sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 - sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 + sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 - sd3/2), color = "purple") +
geom_vline(xintercept = quantile(dv, .5 + sd3/2), color = "purple") +
scale_x_continuous(
limits = c(0,20),
breaks = seq(0,20)
)
```
Run the simulation above several times, noting how the density plot changes. What do the vertical lines represent? Try changing the values of `n`, `mean`, and `sd`.
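One way to check what the vertical lines represent is to calculate the proportion of simulated values that fall within 1, 2, or 3 SDs of the mean. A minimal sketch using the `dv` vector from above (the helper function is just for illustration):
```
# proportion of values within k SDs of the mean
within_k_sd <- function(x, k) mean(abs(x - mean(x)) < k * sd(x))

within_k_sd(dv, 1) # should be close to .6827
within_k_sd(dv, 2) # should be close to .9545
within_k_sd(dv, 3) # should be close to .9973
```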
### 8\.4\.4 Poisson Distribution
The [Poisson distribution](https://psyteachr.github.io/glossary/p#poisson-distribution "A distribution that models independent events happening over a unit of time") is useful for modelling events, like how many times something happens over a unit of time, as long as the events are independent (e.g., an event having happened in one time period doesn’t make it more or less likely to happen in the next).
`rpois(n, lambda)`
The `rpois` function will generate a random Poisson distribution.
* `n` \= number of observations
* `lambda` \= the mean number of events per observation
Let’s say we want to model how many texts you get each day for a whole year. You know that you get an average of 20 texts per day. So we set `n = 365` and `lambda = 20`. Lambda is a [parameter](https://psyteachr.github.io/glossary/p#parameter "A value that describes a distribution, such as the mean or SD") that describes the Poisson distribution, just like mean and standard deviation are parameters that describe the normal distribution.
```
texts <- rpois(n = 365, lambda = 20)
ggplot() +
geom_histogram(
aes(texts),
binwidth = 1,
fill = "white",
color = "black"
)
```
So we can see that over a year, you’re unlikely to get fewer than 5 texts in a day, or more than 35 (although it’s not impossible).
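You can check that intuition against the theoretical distribution using `ppois()`, the cumulative distribution function for the Poisson. A sketch with the same lambda:
```
# probability of getting fewer than 5 texts in a day
ppois(4, lambda = 20)

# probability of getting more than 35 texts in a day
1 - ppois(35, lambda = 20)
```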
8\.5 Multivariate Distributions
-------------------------------
### 8\.5\.1 Bivariate Normal
A [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate-normal "Two normally distributed vectors that have a specified correlation with each other.") distribution is two normally distributed vectors that have a specified relationship, or [correlation](https://psyteachr.github.io/glossary/c#correlation "The relationship two vectors have to each other.") to each other.
What if we want to sample from a population with specific relationships between variables? We can sample from a bivariate normal distribution using `mvrnorm()` from the `MASS` package.
Don’t load MASS with the `library()` function, because it will create a conflict with the `select()` function from dplyr and you would then always need to preface `select()` with `dplyr::`. Just use `MASS::mvrnorm()`.
You need to know how many observations you want to simulate (`n`), the means of the two variables (`mu`), and you need to calculate a [covariance matrix](https://psyteachr.github.io/glossary/c#covariance-matrix "Parameters showing how a set of vectors vary and are correlated.") (`sigma`) from the correlation between the variables (`rho`) and their standard deviations (`sd`).
```
n <- 1000 # number of random samples
# name the mu values to give the resulting columns names
mu <- c(x = 10, y = 20) # the means of the samples
sd <- c(5, 6) # the SDs of the samples
rho <- 0.5 # population correlation between the two variables
# correlation matrix
cor_mat <- matrix(c( 1, rho,
rho, 1), 2)
# create the covariance matrix
sigma <- (sd %*% t(sd)) * cor_mat
# sample from bivariate normal distribution
bvn <- MASS::mvrnorm(n, mu, sigma)
```
Plot your sampled variables to check everything worked like you expect. It’s easiest to convert the output of `mvrnorm()` into a tibble in order to use it in ggplot.
```
bvn %>%
as_tibble() %>%
ggplot(aes(x, y)) +
geom_point(alpha = 0.5) +
geom_smooth(method = "lm") +
geom_density2d()
```
```
## `geom_smooth()` using formula 'y ~ x'
```
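You can also check the sampled values numerically rather than graphically. A short sketch comparing the recovered means, SDs, and correlation to the parameters you set above:
```
colMeans(bvn)      # should be close to mu (10, 20)
apply(bvn, 2, sd)  # should be close to sd (5, 6)
cor(bvn)           # off-diagonal values should be close to rho (0.5)
```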
### 8\.5\.2 Multivariate Normal
You can generate more than 2 correlated variables, but it gets a little trickier to create the correlation matrix.
```
n <- 200 # number of random samples
mu <- c(x = 10, y = 20, z = 30) # the means of the samples
sd <- c(8, 9, 10) # the SDs of the samples
rho1_2 <- 0.5 # correlation between x and y
rho1_3 <- 0 # correlation between x and z
rho2_3 <- 0.7 # correlation between y and z
# correlation matrix
cor_mat <- matrix(c( 1, rho1_2, rho1_3,
rho1_2, 1, rho2_3,
rho1_3, rho2_3, 1), 3)
sigma <- (sd %*% t(sd)) * cor_mat
bvn3 <- MASS::mvrnorm(n, mu, sigma)
cor(bvn3) # check correlation matrix
```
```
## x y z
## x 1.0000000 0.5896674 0.1513108
## y 0.5896674 1.0000000 0.7468737
## z 0.1513108 0.7468737 1.0000000
```
You can use the `plotly` library to make a 3D graph.
```
#set up the marker style
marker_style = list(
color = "#ff0000",
line = list(
color = "#444",
width = 1
),
opacity = 0.5,
size = 5
)
# convert bvn3 to a tibble, plot and add markers
bvn3 %>%
as_tibble() %>%
plot_ly(x = ~x, y = ~y, z = ~z, marker = marker_style) %>%
add_markers()
```
### 8\.5\.3 Faux
Alternatively, you can use the package [faux](https://debruine.github.io/faux/) to generate any number of correlated variables. It also has a function for checking the parameters of your new simulated data (`check_sim_stats()`).
```
bvn3 <- rnorm_multi(
n = n,
vars = 3,
mu = mu,
sd = sd,
r = c(rho1_2, rho1_3, rho2_3),
varnames = c("x", "y", "z")
)
check_sim_stats(bvn3)
```
| n | var | x | y | z | mean | sd |
| --- | --- | --- | --- | --- | --- | --- |
| 200 | x | 1\.00 | 0\.54 | 0\.10 | 10\.35 | 7\.66 |
| 200 | y | 0\.54 | 1\.00 | 0\.67 | 20\.01 | 8\.77 |
| 200 | z | 0\.10 | 0\.67 | 1\.00 | 30\.37 | 9\.59 |
You can also use faux to simulate data for factorial designs. Set up the between\-subject and within\-subject factors as lists with the levels as (named) vectors. Means and standard deviations can be included as vectors or data frames. The function calculates sigma for you, structures your dataset, and outputs a plot of the design.
```
b <- list(pet = c(cat = "Cat Owners",
dog = "Dog Owners"))
w <- list(time = c("morning",
"noon",
"night"))
mu <- data.frame(
cat = c(10, 12, 14),
dog = c(10, 15, 20),
row.names = w$time
)
sd <- c(3, 3, 3, 5, 5, 5)
pet_data <- sim_design(
within = w,
between = b,
n = 100,
mu = mu,
sd = sd,
r = .5)
```
You can use the `check_sim_stats()` function, but you need to set the argument `between` to a vector of all the between\-subject factor columns.
```
check_sim_stats(pet_data, between = "pet")
```
| pet | n | var | morning | noon | night | mean | sd |
| --- | --- | --- | --- | --- | --- | --- | --- |
| cat | 100 | morning | 1\.00 | 0\.57 | 0\.51 | 10\.62 | 3\.48 |
| cat | 100 | noon | 0\.57 | 1\.00 | 0\.59 | 12\.44 | 3\.01 |
| cat | 100 | night | 0\.51 | 0\.59 | 1\.00 | 14\.61 | 3\.14 |
| dog | 100 | morning | 1\.00 | 0\.55 | 0\.50 | 9\.44 | 4\.92 |
| dog | 100 | noon | 0\.55 | 1\.00 | 0\.48 | 14\.18 | 5\.90 |
| dog | 100 | night | 0\.50 | 0\.48 | 1\.00 | 19\.42 | 5\.36 |
See the [faux website](https://debruine.github.io/faux/) for more detailed tutorials.
8\.6 Statistical terms
----------------------
Let’s review some important statistical terms before we review tests of distributions.
### 8\.6\.1 Effect
The [effect](https://psyteachr.github.io/glossary/e#effect "Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value.") is some measure of your data. This will depend on the type of data you have and the type of statistical test you are using. For example, if you flipped a coin 100 times and it landed heads 66 times, the effect would be 66/100\. You can then use the exact binomial test to compare this effect to the [null effect](https://psyteachr.github.io/glossary/n#null-effect "An outcome that does not show an otherwise expected effect.") you would expect from a fair coin (50/100\) or to any other effect you choose. The [effect size](https://psyteachr.github.io/glossary/e#effect-size "The difference between the effect in your data and the null effect (usually a chance value)") refers to the difference between the effect in your data and the null effect (usually a chance value).
### 8\.6\.2 P\-value
The [p\-value](https://psyteachr.github.io/glossary/p#p-value "The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect)") of a test is the probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect). So if you used a binomial test to test against a chance probability of 1/6 (e.g., the probability of rolling 1 with a 6\-sided die), then a p\-value of 0\.17 means that you could expect to see effects at least as extreme as your data 17% of the time just by chance alone.
### 8\.6\.3 Alpha
If you are using null hypothesis significance testing ([NHST](https://psyteachr.github.io/glossary/n#nhst "Null Hypothesis Signficance Testing")), then you need to decide on a cutoff value ([alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot")) for making a decision to reject the null hypothesis. We call p\-values below the alpha cutoff [significant](https://psyteachr.github.io/glossary/s#significant "The conclusion when the p-value is less than the critical alpha. "). In psychology, alpha is traditionally set at 0\.05, but there are good arguments for [setting a different criterion in some circumstances](http://daniellakens.blogspot.com/2019/05/justifying-your-alpha-by-minimizing-or.html).
### 8\.6\.4 False Positive/Negative
The probability that a test concludes there is an effect when there is really no effect (e.g., concludes a fair coin is biased) is called the [false positive](https://psyteachr.github.io/glossary/f#false-positive "When a test concludes there is an effect when there really is no effect") rate (or [Type I Error](https://psyteachr.github.io/glossary/t#type-i-error "A false positive; When a test concludes there is an effect when there is really is no effect") Rate). The [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") is the false positive rate we accept for a test. The probability that a test concludes there is no effect when there really is one (e.g., concludes a biased coin is fair) is called the [false negative](https://psyteachr.github.io/glossary/f#false-negative "When a test concludes there is no effect when there really is an effect") rate (or [Type II Error](https://psyteachr.github.io/glossary/t#type-ii-error "A false negative; When a test concludes there is no effect when there is really is an effect") Rate). The [beta](https://psyteachr.github.io/glossary/b#beta "The false negative rate we accept for a statistical test.") is the false negative rate we accept for a test.
The false positive rate is not the overall probability of getting a false positive, but the probability of a false positive *under the null hypothesis*. Similarly, the false negative rate is the probability of a false negative *under the alternative hypothesis*. Unless we know the probability that we are testing a null effect, we can’t say anything about the overall probability of false positives or negatives. If 100% of the hypotheses we test are false, then all significant effects are false positives, but if all of the hypotheses we test are true, then all of the positives are true positives and the overall false positive rate is 0\.
### 8\.6\.5 Power and SESOI
[Power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") is equal to 1 minus beta (i.e., the [true positive](https://psyteachr.github.io/glossary/t#true-positive "When a test concludes there is an effect when there is really is an effect") rate), and depends on the effect size, how many samples we take (n), and what we set alpha to. For any test, if you specify all but one of these values, you can calculate the last. The effect size you use in power calculations should be the smallest effect size of interest ([SESOI](https://psyteachr.github.io/glossary/s#sesoi "Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful")). See ([Daniël Lakens, Scheel, and Isager 2018](#ref-TOSTtutorial))([https://doi.org/10\.1177/2515245918770963](https://doi.org/10.1177/2515245918770963)) for a tutorial on methods for choosing an SESOI.
Let’s say you want to be able to detect at least a 15% difference from chance (50%) in a coin’s fairness, and you want your test to have a 5% chance of false positives and a 10% chance of false negatives. What are the following values?
* alpha \=
* beta \=
* false positive rate \=
* false negative rate \=
* power \=
* SESOI \=
### 8\.6\.6 Confidence Intervals
The [confidence interval](https://psyteachr.github.io/glossary/c#confidence-interval "A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic.") is a range around some value (such as a mean) that has some probability of containing the parameter, if you repeated the process many times. Traditionally in psychology, we use 95% confidence intervals, but you can calculate CIs for any percentage.
A 95% CI does *not* mean that there is a 95% probability that the true mean lies within this range, but that, if you repeated the study many times and calculated the CI this same way every time, you’d expect the true mean to be inside the CI in 95% of the studies. This seems like a subtle distinction, but can lead to some misunderstandings. See ([Morey et al. 2016](#ref-Morey2016))([https://link.springer.com/article/10\.3758/s13423\-015\-0947\-8](https://link.springer.com/article/10.3758/s13423-015-0947-8)) for more detailed discussion.
8\.7 Tests
----------
### 8\.7\.1 Exact binomial test
`binom.test(x, n, p)`
You can test a binomial distribution against a specific probability using the exact binomial test.
* `x` \= the number of successes
* `n` \= the number of trials
* `p` \= hypothesised probability of success
Here we can test a series of 10 coin flips from a fair coin and a biased coin against the hypothesised probability of 0\.5 (even odds).
```
n <- 10
fair_coin <- rbinom(1, n, 0.5)
biased_coin <- rbinom(1, n, 0.6)
binom.test(fair_coin, n, p = 0.5)
binom.test(biased_coin, n, p = 0.5)
```
```
##
## Exact binomial test
##
## data: fair_coin and n
## number of successes = 6, number of trials = 10, p-value = 0.7539
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.2623781 0.8784477
## sample estimates:
## probability of success
## 0.6
##
##
## Exact binomial test
##
## data: biased_coin and n
## number of successes = 8, number of trials = 10, p-value = 0.1094
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.4439045 0.9747893
## sample estimates:
## probability of success
## 0.8
```
Run the code above several times, noting the p\-values for the fair and biased coins. Alternatively, you can [simulate coin flips](http://shiny.psy.gla.ac.uk/debruine/coinsim/) online and build up a graph of results and p\-values.
* How does the p\-value vary for the fair and biased coins?
* What happens to the confidence intervals if you increase n from 10 to 100?
* What criterion would you use to tell if the observed data indicate the coin is fair or biased?
* How often do you conclude the fair coin is biased (false positives)?
* How often do you conclude the biased coin is fair (false negatives)?
#### 8\.7\.1\.1 Sampling function
To estimate these rates, we need to repeat the sampling above many times. A [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") is ideal for repeating the exact same procedure over and over. Set the arguments of the function to variables that you might want to change. Here, we will want to estimate power for:
* different sample sizes (`n`)
* different effects (`bias`)
* different hypothesised probabilities (`p`, defaults to 0\.5\)
```
sim_binom_test <- function(n, bias, p = 0.5) {
# simulate 1 coin flip n times with the specified bias
coin <- rbinom(1, n, bias)
# run a binomial test on the simulated data for the specified p
btest <- binom.test(coin, n, p)
# return the p-value of this test
btest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_binom_test(100, 0.6)
```
```
## [1] 0.1332106
```
#### 8\.7\.1\.2 Calculate power
Then you can use the `replicate()` function to run it many times and save all the output values. You can calculate the [power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") of your analysis by checking the proportion of your simulated analyses that have a p\-value less than your [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") (the probability of rejecting the null hypothesis when the null hypothesis is true).
```
my_reps <- replicate(1e4, sim_binom_test(100, 0.6))
alpha <- 0.05 # this does not always have to be 0.05
mean(my_reps < alpha)
```
```
## [1] 0.4561
```
`1e4` is just scientific notation for a 1 followed by 4 zeros (`10000`). When you’re running simulations, you usually want to run a lot of them. It’s a pain to keep track of whether you’ve typed 5 or 6 zeros (100000 vs 1000000\) and this will change your running time by an order of magnitude.
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
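The same machinery lets you estimate the false positive rate: simulate a fair coin (`bias = 0.5`) and test it against `p = 0.5`, so that every significant result is a false positive. This is a sketch; because the exact binomial test is discrete, the estimated rate will usually come out at or a little below alpha.
```
# all of these simulations have a true null effect
fair_reps <- replicate(1e4, sim_binom_test(100, bias = 0.5))

# proportion of significant results when the null is true
mean(fair_reps < alpha)
```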
### 8\.7\.2 T\-test
`t.test(x, y, alternative, mu, paired)`
Use a t\-test to compare the mean of one distribution to a null hypothesis (one\-sample t\-test), compare the means of two samples (independent\-samples t\-test), or compare pairs of values (paired\-samples t\-test).
You can run a one\-sample t\-test comparing the mean of your data to `mu`. Here is a simulated distribution with a mean of 0\.5 and an SD of 1, creating an effect size of 0\.5 SD when tested against a `mu` of 0\. Run the simulation a few times to see how often the t\-test returns a significant p\-value (or run it in the [shiny app](http://shiny.psy.gla.ac.uk/debruine/normsim/)).
```
sim_norm <- rnorm(100, 0.5, 1)
t.test(sim_norm, mu = 0)
```
```
##
## One Sample t-test
##
## data: sim_norm
## t = 6.2874, df = 99, p-value = 8.758e-09
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.4049912 0.7784761
## sample estimates:
## mean of x
## 0.5917337
```
Run an independent\-samples t\-test by comparing two lists of values.
```
a <- rnorm(100, 0.5, 1)
b <- rnorm(100, 0.7, 1)
t_ind <- t.test(a, b, paired = FALSE)
t_ind
```
```
##
## Welch Two Sample t-test
##
## data: a and b
## t = -1.8061, df = 197.94, p-value = 0.07243
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.54825320 0.02408469
## sample estimates:
## mean of x mean of y
## 0.4585985 0.7206828
```
The `paired` argument defaults to `FALSE`, but it’s good practice to always explicitly set it so you are never confused about what type of test you are performing.
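For completeness, here is a sketch of a paired\-samples t\-test on simulated data, where each simulated participant contributes a pre and a post score. The 0\.3 mean difference and the way `post` is built from `pre` are made\-up values, just for illustration.
```
pre <- rnorm(100, 10, 2)
post <- pre + rnorm(100, 0.3, 1) # post depends on pre, so the scores are paired

t.test(pre, post, paired = TRUE)
```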
#### 8\.7\.2\.1 Sampling function
We can use the `names()` function to find out the names of all the t.test parameters and use this to just get one type of data, like the test statistic (e.g., t\-value).
```
names(t_ind)
t_ind$statistic
```
```
## [1] "statistic" "parameter" "p.value" "conf.int" "estimate"
## [6] "null.value" "stderr" "alternative" "method" "data.name"
## t
## -1.806051
```
If you want to run the simulation many times and record information each time, first you need to turn your simulation into a function.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2) {
# simulate v1
v1 <- rnorm(n, m1, sd1)
#simulate v2
v2 <- rnorm(n, m2, sd2)
# compare using an independent samples t-test
t_ind <- t.test(v1, v2, paired = FALSE)
# return the p-value
return(t_ind$p.value)
}
```
Run it a few times to check that it gives you sensible values.
```
sim_t_ind(100, 0.7, 1, 0.5, 1)
```
```
## [1] 0.362521
```
#### 8\.7\.2\.2 Calculate power
Now replicate the simulation 1e4 times.
```
my_reps <- replicate(1e4, sim_t_ind(100, 0.7, 1, 0.5, 1))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.2925
```
Run the code above several times. How much does the power value fluctuate? How many replications do you need to run to get a reliable estimate of power?
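One way to think about how many replications you need: the power estimate is just a proportion, so its standard error is roughly `sqrt(p * (1 - p) / reps)`. A sketch comparing a few replication counts, assuming the true power is around 0\.29 as above:
```
p <- 0.29                 # approximate power from the simulation above
reps <- c(100, 1000, 1e4)

# approximate standard error of the estimated power for each replication count
tibble(reps = reps, se = sqrt(p * (1 - p) / reps))
```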
Compare your power estimate from simulation to a power calculation using `power.t.test()`. Here, `delta` is the difference between `m1` and `m2` above.
```
power.t.test(n = 100,
delta = 0.2,
sd = 1,
sig.level = alpha,
type = "two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
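`power.t.test()` can also solve for whichever parameter you leave out. For example, a sketch that omits `n` and asks what sample size you would need per group for 80% power with this small effect (expect a figure of several hundred per group):
```
power.t.test(delta = 0.2,
             sd = 1,
             sig.level = alpha,
             power = 0.8,
             type = "two.sample")
```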
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
What do you think the distribution of p\-values is when there is no effect (i.e., the means are identical)? Check this yourself.
Make sure the `boundary` argument is set to `0` for p\-value histograms. See what happens with a null effect if `boundary` is not set.
### 8\.7\.3 Correlation
You can test if continuous variables are related to each other using the `cor()` function. Let’s use `rnorm_multi()` to make a quick table of correlated values.
```
dat <- rnorm_multi(
n = 100,
vars = 2,
r = -0.5,
varnames = c("x", "y")
)
cor(dat$x, dat$y)
```
```
## [1] -0.4960331
```
Set `n` to a large number like 1e6 so that the correlations are less affected by chance. Change the value of the **mean** for `x` or `y`. Does it change the correlation between `x` and `y`? What happens when you increase or decrease the **sd**? Can you work out any rules here?
`cor()` defaults to Pearson’s correlations. Set the `method` argument to use Kendall or Spearman correlations.
```
cor(dat$x, dat$y, method = "spearman")
```
```
## [1] -0.4724992
```
#### 8\.7\.3\.1 Sampling function
Create a function that creates two variables with `n` observations and `r` correlation. Use the function `cor.test()` to give you p\-values for the correlation.
```
sim_cor_test <- function(n = 100, r = 0) {
dat <- rnorm_multi(
n = n,
vars = 2,
r = r,
varnames = c("x", "y")
)
ctest <- cor.test(dat$x, dat$y)
ctest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_cor_test(50, .5)
```
```
## [1] 0.001354836
```
#### 8\.7\.3\.2 Calculate power
Now replicate the simulation 1e4 times.
```
my_reps <- replicate(1e4, sim_cor_test(50, 0.5))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.965
```
Compare to the value calculated by the pwr package.
```
pwr::pwr.r.test(n = 50, r = 0.5)
```
```
##
## approximate correlation power calculation (arctangh transformation)
##
## n = 50
## r = 0.5
## sig.level = 0.05
## power = 0.9669813
## alternative = two.sided
```
8\.8 Example
------------
This example uses the [Growth Chart Data Tables](https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv) from the [US CDC](https://www.cdc.gov/growthcharts/zscore.htm). The data consist of height in centimeters for the z\-scores of –2, \-1\.5, \-1, \-0\.5, 0, 0\.5, 1, 1\.5, and 2 by sex (1\=male; 2\=female) and half\-month of age (from 24\.0 to 240\.5 months).
### 8\.8\.1 Load \& wrangle
We have to do a little data wrangling first. Have a look at the data after you import it and relabel `Sex` to `male` and `female` instead of `1` and `2`. Also convert `Agemos` (age in months) to years. Relabel the column `0` as `mean` and calculate a new column named `sd` as the difference between columns `1` and `0`.
```
orig_height_age <- read_csv("https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## Sex = col_character(),
## Agemos = col_character(),
## `-2` = col_double(),
## `-1.5` = col_double(),
## `-1` = col_double(),
## `-0.5` = col_double(),
## `0` = col_double(),
## `0.5` = col_double(),
## `1` = col_double(),
## `1.5` = col_double(),
## `2` = col_double()
## )
```
```
height_age <- orig_height_age %>%
filter(Sex %in% c(1,2)) %>%
mutate(
sex = recode(Sex, "1" = "male", "2" = "female"),
age = as.numeric(Agemos)/12,
sd = `1` - `0`
) %>%
select(sex, age, mean = `0`, sd)
```
### 8\.8\.2 Plot
Plot your new data frame to see how mean height changes with age for boys and girls.
```
ggplot(height_age, aes(age, mean, color = sex)) +
geom_smooth(aes(ymin = mean - sd,
ymax = mean + sd),
stat="identity")
```
### 8\.8\.3 Simulate a population
Simulate 50 random male heights and 50 random female heights for 20\-year\-olds using the `rnorm()` function and the means and SDs from the `height_age` table. Plot the data.
```
age_filter <- 20
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_height <- tibble(
male = rnorm(50, m$mean, m$sd),
female = rnorm(50, f$mean, f$sd)
) %>%
gather("sex", "height", male:female)
ggplot(sim_height) +
geom_density(aes(height, fill = sex), alpha = 0.5) +
xlim(125, 225)
```
Run the simulation above several times, noting how the density plot changes. Try changing the age you’re simulating.
### 8\.8\.4 Analyse simulated data
Use the `sim_t_ind(n, m1, sd1, m2, sd2)` function we created above to generate one simulation with a sample size of 50 in each group using the means and SDs of male and female 14\-year\-olds.
```
age_filter <- 14
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_t_ind(50, m$mean, m$sd, f$mean, f$sd)
```
```
## [1] 0.0005255744
```
### 8\.8\.5 Replicate simulation
Now replicate this 1e4 times using the `replicate()` function. This function will save the returned p\-values in a vector (`my_reps`). We can then check what proportion of those p\-values are less than our alpha value. This is the power of our test.
```
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.6403
```
### 8\.8\.6 One\-tailed prediction
This design has about 65% power to detect the sex difference in height (with a 2\-tailed test). Modify the `sim_t_ind` function for a 1\-tailed prediction.
You could just set `alternative` equal to “greater” in the function, but it might be better to add the `alt` argument to your function (giving it the same default value as `t.test`) and change the value of `alternative` in the function to `alt`.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2, alt = "two.sided") {
v1 <- rnorm(n, m1, sd1)
v2 <- rnorm(n, m2, sd2)
t_ind <- t.test(v1, v2, paired = FALSE, alternative = alt)
return(t_ind$p.value)
}
alpha <- 0.05
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(my_reps < alpha)
```
```
## [1] 0.752
```
### 8\.8\.7 Range of sample sizes
What if we want to find out what sample size will give us 80% power? We can try trial and error. We know the number should be slightly larger than 50\. But you can search more systematically by repeating your power calculation for a range of sample sizes.
This might seem like overkill for a t\-test, where you can easily look up sample size calculators online, but it is a valuable skill to learn for when your analyses become more complicated.
Start with a relatively low number of replications and/or more spread\-out samples to estimate where you should be looking more specifically. Then you can repeat with a narrower/denser range of sample sizes and more iterations.
```
# make another custom function to return power
pwr_func <- function(n, reps = 100, alpha = 0.05) {
ps <- replicate(reps, sim_t_ind(n, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(ps < alpha)
}
# make a table of the n values you want to check
power_table <- tibble(
n = seq(20, 100, by = 5)
) %>%
# run the power function for each n
mutate(power = map_dbl(n, pwr_func))
# plot the results
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
Now we can narrow down our search to values around 55 (plus or minus 5\) and increase the number of replications from 100 to 1e4\.
```
power_table <- tibble(
n = seq(50, 60)
) %>%
mutate(power = map_dbl(n, pwr_func, reps = 1e4))
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
8\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [alpha](https://psyteachr.github.io/glossary/a#alpha) | (stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot |
| [beta](https://psyteachr.github.io/glossary/b#beta) | The false negative rate we accept for a statistical test. |
| [binomial distribution](https://psyteachr.github.io/glossary/b#binomial.distribution) | The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. |
| [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate.normal) | Two normally distributed vectors that have a specified correlation with each other. |
| [confidence interval](https://psyteachr.github.io/glossary/c#confidence.interval) | A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic. |
| [correlation](https://psyteachr.github.io/glossary/c#correlation) | The relationship two vectors have to each other. |
| [covariance matrix](https://psyteachr.github.io/glossary/c#covariance.matrix) | Parameters showing how a set of vectors vary and are correlated. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [effect size](https://psyteachr.github.io/glossary/e#effect.size) | The difference between the effect in your data and the null effect (usually a chance value) |
| [effect](https://psyteachr.github.io/glossary/e#effect) | Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value. |
| [false negative](https://psyteachr.github.io/glossary/f#false.negative) | When a test concludes there is no effect when there really is an effect |
| [false positive](https://psyteachr.github.io/glossary/f#false.positive) | When a test concludes there is an effect when there really is no effect |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [nhst](https://psyteachr.github.io/glossary/n#nhst) | Null Hypothesis Significance Testing |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [null effect](https://psyteachr.github.io/glossary/n#null.effect) | An outcome that does not show an otherwise expected effect. |
| [p value](https://psyteachr.github.io/glossary/p#p.value) | The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect) |
| [parameter](https://psyteachr.github.io/glossary/p#parameter) | A value that describes a distribution, such as the mean or SD |
| [poisson distribution](https://psyteachr.github.io/glossary/p#poisson.distribution) | A distribution that models independent events happening over a unit of time |
| [power](https://psyteachr.github.io/glossary/p#power) | The probability of rejecting the null hypothesis when it is false. |
| [probability](https://psyteachr.github.io/glossary/p#probability) | A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty |
| [sesoi](https://psyteachr.github.io/glossary/s#sesoi) | Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful |
| [significant](https://psyteachr.github.io/glossary/s#significant) | The conclusion when the p\-value is less than the critical alpha. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [true positive](https://psyteachr.github.io/glossary/t#true.positive) | When a test concludes there is an effect when there really is an effect |
| [type i error](https://psyteachr.github.io/glossary/t#type.i.error) | A false positive; when a test concludes there is an effect when there really is no effect |
| [type ii error](https://psyteachr.github.io/glossary/t#type.ii.error) | A false negative; when a test concludes there is no effect when there really is an effect |
| [uniform distribution](https://psyteachr.github.io/glossary/u#uniform.distribution) | A distribution where all numbers in the range have an equal probability of being sampled |
| [univariate](https://psyteachr.github.io/glossary/u#univariate) | Relating to a single variable. |
8\.10 Exercises
---------------
Download the [exercises](exercises/08_sim_exercise.Rmd). See the [answers](exercises/08_sim_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(8)
# run this to access the answers
dataskills::exercise(8, answers = TRUE)
```
8\.1 Learning Objectives
------------------------
### 8\.1\.1 Basic
1. Generate and plot data randomly sampled from common distributions [(video)](https://youtu.be/iuecrT3q1kg)
* [uniform](sim.html#uniform)
* [binomial](sim.html#binomial)
* [normal](sim.html#normal)
* [poisson](sim.html#poisson)
2. Generate related variables from a [multivariate](sim.html#mvdist) distribution [(video)](https://youtu.be/B14HfWQ1kIc)
3. Define the following [statistical terms](sim.html#stat-terms):
* [p\-value](sim.html#p-value)
* [alpha](sim.html#alpha)
* [power](sim.html#power)
* smallest effect size of interest ([SESOI](#sesoi))
* [false positive](sim.html#false-pos) (type I error)
* [false negative](#false-neg) (type II error)
* confidence interval ([CI](#conf-inf))
4. Test sampled distributions against a null hypothesis [(video)](https://youtu.be/Am3G6rA2S1s)
* [exact binomial test](sim.html#exact-binom)
* [t\-test](sim.html#t-test) (1\-sample, independent samples, paired samples)
* [correlation](sim.html#correlation) (pearson, kendall and spearman)
5. [Calculate power](sim.html#calc-power-binom) using iteration and a sampling function
### 8\.1\.2 Intermediate
6. Calculate the minimum sample size for a specific power level and design
8\.2 Resources
--------------
* [Stub for this lesson](stubs/8_sim.Rmd)
* [Distribution Shiny App](http://shiny.psy.gla.ac.uk/debruine/simulate/) (or run `dataskills::app("simulate")`)
* [Simulation tutorials](https://debruine.github.io/tutorials/sim-data.html)
* [Chapter 21: Iteration](http://r4ds.had.co.nz/iteration.html) of *R for Data Science*
* [Improving your statistical inferences](https://www.coursera.org/learn/statistical-inferences/) on Coursera (week 1\)
* [Faux](https://debruine.github.io/faux/) package for data simulation
* [Simulation\-Based Power\-Analysis for Factorial ANOVA Designs](https://psyarxiv.com/baxsf) ([Daniel Lakens and Caldwell 2019](#ref-lakens_caldwell_2019))
* [Understanding mixed effects models through data simulation](https://psyarxiv.com/xp5cy/) ([DeBruine and Barr 2019](#ref-debruine_barr_2019))
8\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(plotly)
library(faux)
set.seed(8675309) # makes sure random numbers are reproducible
```
Simulating data is a very powerful way to test your understanding of statistical concepts. We are going to use [simulations](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") to learn the basics of [probability](https://psyteachr.github.io/glossary/p#probability "A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty").
8\.4 Univariate Distributions
-----------------------------
First, we need to understand some different ways data might be distributed and how to simulate data from these distributions. A [univariate](https://psyteachr.github.io/glossary/u#univariate "Relating to a single variable.") distribution is the distribution of a single variable.
### 8\.4\.1 Uniform Distribution
The [uniform distribution](https://psyteachr.github.io/glossary/u#uniform-distribution "A distribution where all numbers in the range have an equal probability of being sampled") is the simplest distribution. All numbers in the range have an equal probability of being sampled.
Take a minute to think of things in your own research that are uniformly distributed.
#### 8\.4\.1\.1 Continuous distribution
`runif(n, min=0, max=1)`
Use `runif()` to sample from a continuous uniform distribution.
```
u <- runif(100000, min = 0, max = 1)
# plot to visualise
ggplot() +
geom_histogram(aes(u), binwidth = 0.05, boundary = 0,
fill = "white", colour = "black")
```
#### 8\.4\.1\.2 Discrete
`sample(x, size, replace = FALSE, prob = NULL)`
Use `sample()` to sample from a [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") distribution.
You can use `sample()` to simulate events like rolling dice or choosing from a deck of cards. The code below simulates rolling a 6\-sided die 10000 times. We set `replace` to `TRUE` so that each event is independent. See what happens if you set `replace` to `FALSE`.
```
rolls <- sample(1:6, 10000, replace = TRUE)
# plot the results
ggplot() +
geom_histogram(aes(rolls), binwidth = 1,
fill = "white", color = "black")
```
Figure 8\.1: Distribution of dice rolls.
You can also use sample to sample from a list of named outcomes.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
sample(pet_types, 10, replace = TRUE)
```
```
## [1] "cat" "cat" "cat" "cat" "ferret" "dog" "bird" "cat"
## [9] "dog" "fish"
```
Ferrets are a much less common pet than cats and dogs, so our sample isn’t very realistic. You can set the probabilities of each item in the list with the `prob` argument.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
pet_prob <- c(0.3, 0.4, 0.1, 0.1, 0.1)
sample(pet_types, 10, replace = TRUE, prob = pet_prob)
```
```
## [1] "fish" "dog" "cat" "dog" "cat" "dog" "fish" "dog" "cat" "fish"
```
### 8\.4\.2 Binomial Distribution
The [binomial distribution](https://psyteachr.github.io/glossary/b#binomial-distribution "The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. ") is useful for modelling binary data, where each observation can have one of two outcomes, like success/failure, yes/no or heads/tails.
`rbinom(n, size, prob)`
The `rbinom` function will generate a random binomial distribution.
* `n` \= number of observations
* `size` \= number of trials
* `prob` \= probability of success on each trial
Coin flips are a typical example of a binomial distribution, where we can assign heads to 1 and tails to 0\.
```
# 20 individual coin flips of a fair coin
rbinom(20, 1, 0.5)
```
```
## [1] 1 1 1 0 1 1 0 1 0 0 1 1 1 0 0 0 1 0 0 0
```
```
# 20 individual coin flips of a biased (0.75) coin
rbinom(20, 1, 0.75)
```
```
## [1] 1 1 1 0 1 0 1 1 1 0 1 1 1 0 0 1 1 1 1 1
```
You can generate the total number of heads in 1 set of 20 coin flips by setting `size` to 20 and `n` to 1\.
```
rbinom(1, 20, 0.75)
```
```
## [1] 13
```
You can generate more sets of 20 coin flips by increasing the `n`.
```
rbinom(10, 20, 0.5)
```
```
## [1] 10 14 11 7 11 13 6 10 9 9
```
You should always check your randomly generated data to make sure it makes sense. For large samples, it’s easiest to do that graphically. A histogram is usually the best choice for plotting binomial data.
```
flips <- rbinom(1000, 20, 0.5)
ggplot() +
geom_histogram(
aes(flips),
binwidth = 1,
fill = "white",
color = "black"
)
```
Run the simulation above several times, noting how the histogram changes. Try changing the values of `n`, `size`, and `prob`.
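If you want to check the simulation against theory, `dbinom()` gives the exact probability of any particular outcome. A rough sketch comparing the simulated and theoretical probability of getting exactly 10 heads in 20 flips:
```
flips <- rbinom(1000, 20, 0.5)
# simulated vs theoretical probability of exactly 10 heads in 20 flips
c(simulated = mean(flips == 10),
  theoretical = dbinom(10, 20, 0.5))
```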
### 8\.4\.3 Normal Distribution
`rnorm(n, mean, sd)`
We can simulate a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") of size `n` if we know the `mean` and standard deviation (`sd`). A density plot is usually the best way to visualise this type of data if your `n` is large.
```
dv <- rnorm(1e5, 10, 2)
# proportions of normally-distributed data
# within 1, 2, or 3 SD of the mean
sd1 <- .6827
sd2 <- .9545
sd3 <- .9973
ggplot() +
geom_density(aes(dv), fill = "white") +
geom_vline(xintercept = mean(dv), color = "red") +
geom_vline(xintercept = quantile(dv, .5 - sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 + sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 - sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 + sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 - sd3/2), color = "purple") +
geom_vline(xintercept = quantile(dv, .5 + sd3/2), color = "purple") +
scale_x_continuous(
limits = c(0,20),
breaks = seq(0,20)
)
```
Run the simulation above several times, noting how the density plot changes. What do the vertical lines represent? Try changing the values of `n`, `mean`, and `sd`.
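The hard-coded proportions above (.6827, .9545, .9973) come from the normal cumulative distribution function; you can recover them with `pnorm()`:
```
# proportion of a normal distribution within 1, 2, or 3 SD of the mean
pnorm(1) - pnorm(-1) # ~0.6827
pnorm(2) - pnorm(-2) # ~0.9545
pnorm(3) - pnorm(-3) # ~0.9973
```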
### 8\.4\.4 Poisson Distribution
The [Poisson distribution](https://psyteachr.github.io/glossary/p#poisson-distribution "A distribution that models independent events happening over a unit of time") is useful for modelling events, like how many times something happens over a unit of time, as long as the events are independent (e.g., an event having happened in one time period doesn’t make it more or less likely to happen in the next).
`rpois(n, lambda)`
The `rpois` function will generate a random Poisson distribution.
* `n` \= number of observations
* `lambda` \= the mean number of events per observation
Let’s say we want to model how many texts you get each day for a whole year. You know that you get an average of 20 texts per day. So we set `n = 365` and `lambda = 20`. Lambda is a [parameter](https://psyteachr.github.io/glossary/p#parameter "A value that describes a distribution, such as the mean or SD") that describes the Poisson distribution, just like mean and standard deviation are parameters that describe the normal distribution.
```
texts <- rpois(n = 365, lambda = 20)
ggplot() +
geom_histogram(
aes(texts),
binwidth = 1,
fill = "white",
color = "black"
)
```
So we can see that over a year, you’re unlikely to get fewer than 5 texts in a day, or more than 35 (although it’s not impossible).
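You can check this claim against the theoretical Poisson distribution with `ppois()`, which gives cumulative probabilities. A quick sketch:
```
# P(fewer than 5 texts in a day) and P(more than 35 texts in a day)
# when the average is 20 per day; both probabilities are very small
ppois(4, lambda = 20)
1 - ppois(35, lambda = 20)
```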
8\.5 Multivariate Distributions
-------------------------------
### 8\.5\.1 Bivariate Normal
A [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate-normal "Two normally distributed vectors that have a specified correlation with each other.") distribution is two normally distributed vectors that have a specified relationship, or [correlation](https://psyteachr.github.io/glossary/c#correlation "The relationship two vectors have to each other.") to each other.
What if we want to sample from a population with specific relationships between variables? We can sample from a bivariate normal distribution using `mvrnorm()` from the `MASS` package.
Don’t load MASS with the `library()` function: MASS has its own `select()` function that masks `select()` from dplyr, so you would then have to write `dplyr::select()` every time. Just use `MASS::mvrnorm()`.
You need to know how many observations you want to simulate (`n`) the means of the two variables (`mu`) and you need to calculate a [covariance matrix](https://psyteachr.github.io/glossary/c#covariance-matrix "Parameters showing how a set of vectors vary and are correlated.") (`sigma`) from the correlation between the variables (`rho`) and their standard deviations (`sd`).
```
n <- 1000 # number of random samples
# name the mu values to give the resulting columns names
mu <- c(x = 10, y = 20) # the means of the samples
sd <- c(5, 6) # the SDs of the samples
rho <- 0.5 # population correlation between the two variables
# correlation matrix
cor_mat <- matrix(c( 1, rho,
rho, 1), 2)
# create the covariance matrix
sigma <- (sd %*% t(sd)) * cor_mat
# sample from bivariate normal distribution
bvn <- MASS::mvrnorm(n, mu, sigma)
```
Plot your sampled variables to check everything worked like you expect. It’s easiest to convert the output of `mvrnorm()` into a tibble in order to use it in ggplot.
```
bvn %>%
as_tibble() %>%
ggplot(aes(x, y)) +
geom_point(alpha = 0.5) +
geom_smooth(method = "lm") +
geom_density2d()
```
```
## `geom_smooth()` using formula 'y ~ x'
```
### 8\.5\.2 Multivariate Normal
You can generate more than 2 correlated variables, but it gets a little trickier to create the correlation matrix.
```
n <- 200 # number of random samples
mu <- c(x = 10, y = 20, z = 30) # the means of the samples
sd <- c(8, 9, 10) # the SDs of the samples
rho1_2 <- 0.5 # correlation between x and y
rho1_3 <- 0 # correlation between x and z
rho2_3 <- 0.7 # correlation between y and z
# correlation matrix
cor_mat <- matrix(c( 1, rho1_2, rho1_3,
rho1_2, 1, rho2_3,
rho1_3, rho2_3, 1), 3)
sigma <- (sd %*% t(sd)) * cor_mat
bvn3 <- MASS::mvrnorm(n, mu, sigma)
cor(bvn3) # check correlation matrix
```
```
## x y z
## x 1.0000000 0.5896674 0.1513108
## y 0.5896674 1.0000000 0.7468737
## z 0.1513108 0.7468737 1.0000000
```
You can use the `plotly` library to make a 3D graph.
```
#set up the marker style
marker_style = list(
color = "#ff0000",
line = list(
color = "#444",
width = 1
),
opacity = 0.5,
size = 5
)
# convert bvn3 to a tibble, plot and add markers
bvn3 %>%
as_tibble() %>%
plot_ly(x = ~x, y = ~y, z = ~z, marker = marker_style) %>%
add_markers()
```
### 8\.5\.3 Faux
Alternatively, you can use the package [faux](https://debruine.github.io/faux/) to generate any number of correlated variables. It also has a function for checking the parameters of your new simulated data (`check_sim_stats()`).
```
bvn3 <- rnorm_multi(
n = n,
vars = 3,
mu = mu,
sd = sd,
r = c(rho1_2, rho1_3, rho2_3),
varnames = c("x", "y", "z")
)
check_sim_stats(bvn3)
```
| n | var | x | y | z | mean | sd |
| --- | --- | --- | --- | --- | --- | --- |
| 200 | x | 1\.00 | 0\.54 | 0\.10 | 10\.35 | 7\.66 |
| 200 | y | 0\.54 | 1\.00 | 0\.67 | 20\.01 | 8\.77 |
| 200 | z | 0\.10 | 0\.67 | 1\.00 | 30\.37 | 9\.59 |
You can also use faux to simulate data for factorial designs. Set up the between\-subject and within\-subject factors as lists with the levels as (named) vectors. Means and standard deviations can be included as vectors or data frames. The function calculates sigma for you, structures your dataset, and outputs a plot of the design.
```
b <- list(pet = c(cat = "Cat Owners",
dog = "Dog Owners"))
w <- list(time = c("morning",
"noon",
"night"))
mu <- data.frame(
cat = c(10, 12, 14),
dog = c(10, 15, 20),
row.names = w$time
)
sd <- c(3, 3, 3, 5, 5, 5)
pet_data <- sim_design(
within = w,
between = b,
n = 100,
mu = mu,
sd = sd,
r = .5)
```
You can use the `check_sim_stats()` function, but you need to set the argument `between` to a vector of all the between\-subject factor columns.
```
check_sim_stats(pet_data, between = "pet")
```
| pet | n | var | morning | noon | night | mean | sd |
| --- | --- | --- | --- | --- | --- | --- | --- |
| cat | 100 | morning | 1\.00 | 0\.57 | 0\.51 | 10\.62 | 3\.48 |
| cat | 100 | noon | 0\.57 | 1\.00 | 0\.59 | 12\.44 | 3\.01 |
| cat | 100 | night | 0\.51 | 0\.59 | 1\.00 | 14\.61 | 3\.14 |
| dog | 100 | morning | 1\.00 | 0\.55 | 0\.50 | 9\.44 | 4\.92 |
| dog | 100 | noon | 0\.55 | 1\.00 | 0\.48 | 14\.18 | 5\.90 |
| dog | 100 | night | 0\.50 | 0\.48 | 1\.00 | 19\.42 | 5\.36 |
See the [faux website](https://debruine.github.io/faux/) for more detailed tutorials.
8\.6 Statistical terms
----------------------
Let’s review some important statistical terms before we review tests of distributions.
### 8\.6\.1 Effect
The [effect](https://psyteachr.github.io/glossary/e#effect "Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value.") is some measure of your data. This will depend on the type of data you have and the type of statistical test you are using. For example, if you flipped a coin 100 times and it landed heads 66 times, the effect would be 66/100\. You can then use the exact binomial test to compare this effect to the [null effect](https://psyteachr.github.io/glossary/n#null-effect "An outcome that does not show an otherwise expected effect.") you would expect from a fair coin (50/100\) or to any other effect you choose. The [effect size](https://psyteachr.github.io/glossary/e#effect-size "The difference between the effect in your data and the null effect (usually a chance value)") refers to the difference between the effect in your data and the null effect (usually a chance value).
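As a preview of the tests covered below, here is a sketch of that comparison using `binom.test()`, testing 66 heads out of 100 flips against a null probability of 0.5:
```
# exact binomial test of 66/100 heads against a fair coin
binom.test(x = 66, n = 100, p = 0.5)
```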
### 8\.6\.2 P\-value
The [p\-value](https://psyteachr.github.io/glossary/p#p-value "The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect)") of a test is the probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect). So if you used a binomial test to test against a chance probability of 1/6 (e.g., the probability of rolling 1 with a 6\-sided die), then a p\-value of 0\.17 means that you could expect to see effects at least as extreme as your data 17% of the time just by chance alone.
### 8\.6\.3 Alpha
If you are using null hypothesis significance testing ([NHST](https://psyteachr.github.io/glossary/n#nhst "Null Hypothesis Signficance Testing")), then you need to decide on a cutoff value ([alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot")) for making a decision to reject the null hypothesis. We call p\-values below the alpha cutoff [significant](https://psyteachr.github.io/glossary/s#significant "The conclusion when the p-value is less than the critical alpha. "). In psychology, alpha is traditionally set at 0\.05, but there are good arguments for [setting a different criterion in some circumstances](http://daniellakens.blogspot.com/2019/05/justifying-your-alpha-by-minimizing-or.html).
### 8\.6\.4 False Positive/Negative
The probability that a test concludes there is an effect when there is really no effect (e.g., concludes a fair coin is biased) is called the [false positive](https://psyteachr.github.io/glossary/f#false-positive "When a test concludes there is an effect when there really is no effect") rate (or [Type I Error](https://psyteachr.github.io/glossary/t#type-i-error "A false positive; When a test concludes there is an effect when there is really is no effect") Rate). The [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") is the false positive rate we accept for a test. The probability that a test concludes there is no effect when there really is one (e.g., concludes a biased coin is fair) is called the [false negative](https://psyteachr.github.io/glossary/f#false-negative "When a test concludes there is no effect when there really is an effect") rate (or [Type II Error](https://psyteachr.github.io/glossary/t#type-ii-error "A false negative; When a test concludes there is no effect when there is really is an effect") Rate). The [beta](https://psyteachr.github.io/glossary/b#beta "The false negative rate we accept for a statistical test.") is the false negative rate we accept for a test.
The false positive rate is not the overall probability of getting a false positive, but the probability of a false positive *under the null hypothesis*. Similarly, the false negative rate is the probability of a false negative *under the alternative hypothesis*. Unless we know the probability that we are testing a null effect, we can’t say anything about the overall probability of false positives or negatives. If 100% of the hypotheses we test are false, then all significant effects are false positives, but if all of the hypotheses we test are true, then all of the positives are true positives and the overall false positive rate is 0\.
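You can see the false positive rate in action by simulating many tests of a true null effect; the proportion of significant results should be close to alpha. A minimal sketch using an independent-samples t-test on two samples drawn from the same population:
```
# simulate 1000 t-tests where the null hypothesis is true
null_ps <- replicate(1000, {
  a <- rnorm(50, 0, 1)
  b <- rnorm(50, 0, 1)
  t.test(a, b)$p.value
})
mean(null_ps < .05) # should be close to the alpha of .05
```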
### 8\.6\.5 Power and SESOI
[Power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") is equal to 1 minus beta (i.e., the [true positive](https://psyteachr.github.io/glossary/t#true-positive "When a test concludes there is an effect when there is really is an effect") rate), and depends on the effect size, how many samples we take (n), and what we set alpha to. For any test, if you specify all but one of these values, you can calculate the last. The effect size you use in power calculations should be the smallest effect size of interest ([SESOI](https://psyteachr.github.io/glossary/s#sesoi "Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful")). See ([Daniël Lakens, Scheel, and Isager 2018](#ref-TOSTtutorial))([https://doi.org/10\.1177/2515245918770963](https://doi.org/10.1177/2515245918770963)) for a tutorial on methods for choosing an SESOI.
Let’s say you want to be able to detect at least a 15% difference from chance (50%) in a coin’s fairness, and you want your test to have a 5% chance of false positives and a 10% chance of false negatives. What are the following values?
* alpha \=
* beta \=
* false positive rate \=
* false negative rate \=
* power \=
* SESOI \=
### 8\.6\.6 Confidence Intervals
The [confidence interval](https://psyteachr.github.io/glossary/c#confidence-interval "A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic.") is a range around some value (such as a mean) that has some probability of containing the parameter, if you repeated the process many times. Traditionally in psychology, we use 95% confidence intervals, but you can calculate CIs for any percentage.
A 95% CI does *not* mean that there is a 95% probability that the true mean lies within this range, but that, if you repeated the study many times and calculated the CI this same way every time, you’d expect the true mean to be inside the CI in 95% of the studies. This seems like a subtle distinction, but can lead to some misunderstandings. See ([Morey et al. 2016](#ref-Morey2016))([https://link.springer.com/article/10\.3758/s13423\-015\-0947\-8](https://link.springer.com/article/10.3758/s13423-015-0947-8)) for more detailed discussion.
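A small simulation makes the coverage interpretation concrete: draw many samples from a population with a known mean, compute a 95% CI for each, and count how often the interval contains the true mean. A sketch, assuming a population with mean 100 and SD 15:
```
# proportion of 95% CIs that contain the true mean of 100
ci_contains_mu <- replicate(1000, {
  x <- rnorm(30, mean = 100, sd = 15)
  ci <- t.test(x)$conf.int
  ci[1] < 100 & 100 < ci[2]
})
mean(ci_contains_mu) # should be close to 0.95
```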
8\.7 Tests
----------
### 8\.7\.1 Exact binomial test
`binom.test(x, n, p)`
You can test a binomial distribution against a specific probability using the exact binomial test.
* `x` \= the number of successes
* `n` \= the number of trials
* `p` \= hypothesised probability of success
Here we can test a series of 10 coin flips from a fair coin and a biased coin against the hypothesised probability of 0\.5 (even odds).
```
n <- 10
fair_coin <- rbinom(1, n, 0.5)
biased_coin <- rbinom(1, n, 0.6)
binom.test(fair_coin, n, p = 0.5)
binom.test(biased_coin, n, p = 0.5)
```
```
##
## Exact binomial test
##
## data: fair_coin and n
## number of successes = 6, number of trials = 10, p-value = 0.7539
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.2623781 0.8784477
## sample estimates:
## probability of success
## 0.6
##
##
## Exact binomial test
##
## data: biased_coin and n
## number of successes = 8, number of trials = 10, p-value = 0.1094
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.4439045 0.9747893
## sample estimates:
## probability of success
## 0.8
```
Run the code above several times, noting the p\-values for the fair and biased coins. Alternatively, you can [simulate coin flips](http://shiny.psy.gla.ac.uk/debruine/coinsim/) online and build up a graph of results and p\-values.
* How does the p\-value vary for the fair and biased coins?
* What happens to the confidence intervals if you increase n from 10 to 100?
* What criterion would you use to tell if the observed data indicate the coin is fair or biased?
* How often do you conclude the fair coin is biased (false positives)?
* How often do you conclude the biased coin is fair (false negatives)?
#### 8\.7\.1\.1 Sampling function
To estimate these rates, we need to repeat the sampling above many times. A [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") is ideal for repeating the exact same procedure over and over. Set the arguments of the function to variables that you might want to change. Here, we will want to estimate power for:
* different sample sizes (`n`)
* different effects (`bias`)
* different hypothesised probabilities (`p`, defaults to 0\.5\)
```
sim_binom_test <- function(n, bias, p = 0.5) {
# simulate 1 coin flip n times with the specified bias
coin <- rbinom(1, n, bias)
# run a binomial test on the simulated data for the specified p
btest <- binom.test(coin, n, p)
# return the p-value of this test
btest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_binom_test(100, 0.6)
```
```
## [1] 0.1332106
```
#### 8\.7\.1\.2 Calculate power
Then you can use the `replicate()` function to run it many times and save all the output values. You can calculate the [power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") of your analysis by checking the proportion of your simulated analyses that have a p\-value less than your [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") (the probability of rejecting the null hypothesis when the null hypothesis is true).
```
my_reps <- replicate(1e4, sim_binom_test(100, 0.6))
alpha <- 0.05 # this does not always have to be 0.05
mean(my_reps < alpha)
```
```
## [1] 0.4561
```
`1e4` is just scientific notation for a 1 followed by 4 zeros (`10000`). When you’re running simulations, you usually want to run a lot of them. It’s a pain to keep track of whether you’ve typed 5 or 6 zeros (100000 vs 1000000\) and this will change your running time by an order of magnitude.
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
### 8\.7\.2 T\-test
`t.test(x, y, alternative, mu, paired)`
Use a t\-test to compare the mean of one distribution to a null hypothesis (one\-sample t\-test), compare the means of two samples (independent\-samples t\-test), or compare pairs of values (paired\-samples t\-test).
You can run a one\-sample t\-test comparing the mean of your data to `mu`. Here is a simulated distribution with a mean of 0\.5 and an SD of 1, creating an effect size of 0\.5 SD when tested against a `mu` of 0\. Run the simulation a few times to see how often the t\-test returns a significant p\-value (or run it in the [shiny app](http://shiny.psy.gla.ac.uk/debruine/normsim/)).
```
sim_norm <- rnorm(100, 0.5, 1)
t.test(sim_norm, mu = 0)
```
```
##
## One Sample t-test
##
## data: sim_norm
## t = 6.2874, df = 99, p-value = 8.758e-09
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.4049912 0.7784761
## sample estimates:
## mean of x
## 0.5917337
```
Run an independent\-samples t\-test by comparing two lists of values.
```
a <- rnorm(100, 0.5, 1)
b <- rnorm(100, 0.7, 1)
t_ind <- t.test(a, b, paired = FALSE)
t_ind
```
```
##
## Welch Two Sample t-test
##
## data: a and b
## t = -1.8061, df = 197.94, p-value = 0.07243
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.54825320 0.02408469
## sample estimates:
## mean of x mean of y
## 0.4585985 0.7206828
```
The `paired` argument defaults to `FALSE`, but it’s good practice to always explicitly set it so you are never confused about what type of test you are performing.
#### 8\.7\.2\.1 Sampling function
We can use the `names()` function to find out the names of all the t.test parameters and use this to just get one type of data, like the test statistic (e.g., t\-value).
```
names(t_ind)
t_ind$statistic
```
```
## [1] "statistic" "parameter" "p.value" "conf.int" "estimate"
## [6] "null.value" "stderr" "alternative" "method" "data.name"
## t
## -1.806051
```
If you want to run the simulation many times and record information each time, first you need to turn your simulation into a function.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2) {
# simulate v1
v1 <- rnorm(n, m1, sd1)
#simulate v2
v2 <- rnorm(n, m2, sd2)
# compare using an independent samples t-test
t_ind <- t.test(v1, v2, paired = FALSE)
# return the p-value
return(t_ind$p.value)
}
```
Run it a few times to check that it gives you sensible values.
```
sim_t_ind(100, 0.7, 1, 0.5, 1)
```
```
## [1] 0.362521
```
#### 8\.7\.2\.2 Calculate power
Now replicate the simulation 10,000 times.
```
my_reps <- replicate(1e4, sim_t_ind(100, 0.7, 1, 0.5, 1))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.2925
```
Run the code above several times. How much does the power value fluctuate? How many replications do you need to run to get a reliable estimate of power?
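One way to reason about the fluctuation: the power estimate is just a proportion of significant replications, so its standard error is roughly `sqrt(p * (1 - p) / reps)`, and more replications shrink that error. A back-of-the-envelope sketch, assuming a true power near 0.29:
```
# approximate standard error of a power estimate near 0.29
sqrt(0.29 * (1 - 0.29) / 1e3) # with 1,000 replications
sqrt(0.29 * (1 - 0.29) / 1e4) # with 10,000 replications
```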
Compare your power estimate from simulation to a power calculation using `power.t.test()`. Here, `delta` is the difference between `m1` and `m2` above.
```
power.t.test(n = 100,
delta = 0.2,
sd = 1,
sig.level = alpha,
type = "two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
What do you think the distribution of p\-values is when there is no effect (i.e., the means are identical)? Check this yourself.
Make sure the `boundary` argument is set to `0` for p\-value histograms. See what happens with a null effect if `boundary` is not set.
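A quick way to check for yourself: re-use `sim_t_ind()` with identical means and plot the resulting p-values with the same histogram code as above.
```
# p-values when the two population means are identical
null_reps <- replicate(1e4, sim_t_ind(100, 0.5, 1, 0.5, 1))
ggplot() +
  geom_histogram(
    aes(null_reps),
    binwidth = 0.05,
    boundary = 0,
    fill = "white",
    color = "black"
  )
```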
### 8\.7\.3 Correlation
You can test if continuous variables are related to each other using the `cor()` function. Let’s use `rnorm_multi()` to make a quick table of correlated values.
```
dat <- rnorm_multi(
n = 100,
vars = 2,
r = -0.5,
varnames = c("x", "y")
)
cor(dat$x, dat$y)
```
```
## [1] -0.4960331
```
Set `n` to a large number like 1e6 so that the correlations are less affected by chance. Change the value of the **mu** argument for `x` or `y`. Does it change the correlation between `x` and `y`? What happens when you increase or decrease the **sd**? Can you work out any rules here?
`cor()` defaults to Pearson’s correlations. Set the `method` argument to use Kendall or Spearman correlations.
```
cor(dat$x, dat$y, method = "spearman")
```
```
## [1] -0.4724992
```
#### 8\.7\.3\.1 Sampling function
Create a function that creates two variables with `n` observations and `r` correlation. Use the function `cor.test()` to give you p\-values for the correlation.
```
sim_cor_test <- function(n = 100, r = 0) {
dat <- rnorm_multi(
n = n,
vars = 2,
r = r,
varnames = c("x", "y")
)
ctest <- cor.test(dat$x, dat$y)
ctest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_cor_test(50, .5)
```
```
## [1] 0.001354836
```
#### 8\.7\.3\.2 Calculate power
Now replicate the simulation 10,000 times.
```
my_reps <- replicate(1e4, sim_cor_test(50, 0.5))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.965
```
Compare to the value calculated by the pwr package.
```
pwr::pwr.r.test(n = 50, r = 0.5)
```
```
##
## approximate correlation power calculation (arctangh transformation)
##
## n = 50
## r = 0.5
## sig.level = 0.05
## power = 0.9669813
## alternative = two.sided
```
### 8\.7\.1 Exact binomial test
`binom.test(x, n, p)`
You can test a binomial distribution against a specific probability using the exact binomial test.
* `x` \= the number of successes
* `n` \= the number of trials
* `p` \= hypothesised probability of success
Here we can test a series of 10 coin flips from a fair coin and a biased coin against the hypothesised probability of 0\.5 (even odds).
```
n <- 10
fair_coin <- rbinom(1, n, 0.5)
biased_coin <- rbinom(1, n, 0.6)
binom.test(fair_coin, n, p = 0.5)
binom.test(biased_coin, n, p = 0.5)
```
```
##
## Exact binomial test
##
## data: fair_coin and n
## number of successes = 6, number of trials = 10, p-value = 0.7539
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.2623781 0.8784477
## sample estimates:
## probability of success
## 0.6
##
##
## Exact binomial test
##
## data: biased_coin and n
## number of successes = 8, number of trials = 10, p-value = 0.1094
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.4439045 0.9747893
## sample estimates:
## probability of success
## 0.8
```
Run the code above several times, noting the p\-values for the fair and biased coins. Alternatively, you can [simulate coin flips](http://shiny.psy.gla.ac.uk/debruine/coinsim/) online and build up a graph of results and p\-values.
* How does the p\-value vary for the fair and biased coins?
* What happens to the confidence intervals if you increase n from 10 to 100?
* What criterion would you use to tell if the observed data indicate the coin is fair or biased?
* How often do you conclude the fair coin is biased (false positives)?
* How often do you conclude the biased coin is fair (false negatives)?
#### 8\.7\.1\.1 Sampling function
To estimate these rates, we need to repeat the sampling above many times. A [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") is ideal for repeating the exact same procedure over and over. Set the arguments of the function to variables that you might want to change. Here, we will want to estimate power for:
* different sample sizes (`n`)
* different effects (`bias`)
* different hypothesised probabilities (`p`, defaults to 0\.5\)
```
sim_binom_test <- function(n, bias, p = 0.5) {
# simulate 1 coin flip n times with the specified bias
coin <- rbinom(1, n, bias)
# run a binomial test on the simulated data for the specified p
btest <- binom.test(coin, n, p)
# return the p-value of this test
btest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_binom_test(100, 0.6)
```
```
## [1] 0.1332106
```
#### 8\.7\.1\.2 Calculate power
Then you can use the `replicate()` function to run it many times and save all the output values. You can calculate the [power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") of your analysis by checking the proportion of your simulated analyses that have a p\-value less than your [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") (the probability of rejecting the null hypothesis when the null hypothesis is true).
```
my_reps <- replicate(1e4, sim_binom_test(100, 0.6))
alpha <- 0.05 # this does not always have to be 0.05
mean(my_reps < alpha)
```
```
## [1] 0.4561
```
`1e4` is just scientific notation for a 1 followed by 4 zeros (`10000`). When you’re running simulations, you usually want to run a lot of them. It’s a pain to keep track of whether you’ve typed 5 or 6 zeros (100000 vs 1000000\) and this will change your running time by an order of magnitude.
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
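Coming back to the questions above about false positives: you can estimate the false positive rate with the same machinery by simulating a *fair* coin (bias equal to the hypothesised `p`), so that every significant result is a false positive. This is just a sketch using the `sim_binom_test()` function defined above; the object name `fair_reps` is ours.
```
# every simulated coin is fair, so any significant test is a false positive
fair_reps <- replicate(1e4, sim_binom_test(100, bias = 0.5, p = 0.5))
# this should be at or below alpha; the exact binomial test is a little
# conservative because the outcomes are discrete
mean(fair_reps < 0.05)
```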
### 8\.7\.2 T\-test
`t.test(x, y, alternative, mu, paired)`
Use a t\-test to compare the mean of one distribution to a null hypothesis (one\-sample t\-test), compare the means of two samples (independent\-samples t\-test), or compare pairs of values (paired\-samples t\-test).
You can run a one\-sample t\-test comparing the mean of your data to `mu`. Here is a simulated distribution with a mean of 0\.5 and an SD of 1, creating an effect size of 0\.5 SD when tested against a `mu` of 0\. Run the simulation a few times to see how often the t\-test returns a significant p\-value (or run it in the [shiny app](http://shiny.psy.gla.ac.uk/debruine/normsim/)).
```
sim_norm <- rnorm(100, 0.5, 1)
t.test(sim_norm, mu = 0)
```
```
##
## One Sample t-test
##
## data: sim_norm
## t = 6.2874, df = 99, p-value = 8.758e-09
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.4049912 0.7784761
## sample estimates:
## mean of x
## 0.5917337
```
Run an independent\-samples t\-test by comparing two lists of values.
```
a <- rnorm(100, 0.5, 1)
b <- rnorm(100, 0.7, 1)
t_ind <- t.test(a, b, paired = FALSE)
t_ind
```
```
##
## Welch Two Sample t-test
##
## data: a and b
## t = -1.8061, df = 197.94, p-value = 0.07243
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.54825320 0.02408469
## sample estimates:
## mean of x mean of y
## 0.4585985 0.7206828
```
The `paired` argument defaults to `FALSE`, but it’s good practice to always explicitly set it so you are never confused about what type of test you are performing.
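The chapter only simulates one-sample and independent-samples tests here, but for completeness, this is a rough sketch of a paired-samples t-test on simulated pre/post scores (the numbers and names are made up for illustration).
```
pre <- rnorm(100, 10, 2) # baseline scores
post <- pre + rnorm(100, 0.3, 1) # each score shifts up by about 0.3 on average
t.test(pre, post, paired = TRUE)
```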
#### 8\.7\.2\.1 Sampling function
We can use the `names()` function to find out the names of all the t.test parameters and use this to just get one type of data, like the test statistic (e.g., t\-value).
```
names(t_ind)
t_ind$statistic
```
```
## [1] "statistic" "parameter" "p.value" "conf.int" "estimate"
## [6] "null.value" "stderr" "alternative" "method" "data.name"
## t
## -1.806051
```
If you want to run the simulation many times and record information each time, first you need to turn your simulation into a function.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2) {
# simulate v1
v1 <- rnorm(n, m1, sd1)
#simulate v2
v2 <- rnorm(n, m2, sd2)
# compare using an independent samples t-test
t_ind <- t.test(v1, v2, paired = FALSE)
# return the p-value
return(t_ind$p.value)
}
```
Run it a few times to check that it gives you sensible values.
```
sim_t_ind(100, 0.7, 1, 0.5, 1)
```
```
## [1] 0.362521
```
#### 8\.7\.2\.2 Calculate power
Now replicate the simulation 10,000 (`1e4`) times.
```
my_reps <- replicate(1e4, sim_t_ind(100, 0.7, 1, 0.5, 1))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.2925
```
Run the code above several times. How much does the power value fluctuate? How many replications do you need to run to get a reliable estimate of power?
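One way to get a feel for this is to repeat the whole power calculation several times at different numbers of replications. This is only a sketch; the helper `estimate_power()` is ours, not part of the chapter.
```
estimate_power <- function(reps) {
  mean(replicate(reps, sim_t_ind(100, 0.7, 1, 0.5, 1)) < 0.05)
}
replicate(5, estimate_power(100)) # estimates bounce around a lot
replicate(5, estimate_power(1e4)) # much more stable, but slower to run
```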
Compare your power estimate from simulation to a power calculation using `power.t.test()`. Here, `delta` is the difference between `m1` and `m2` above.
```
power.t.test(n = 100,
delta = 0.2,
sd = 1,
sig.level = alpha,
type = "two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
What do you think the distribution of p\-values is when there is no effect (i.e., the means are identical)? Check this yourself.
Make sure the `boundary` argument is set to `0` for p\-value histograms. See what happens with a null effect if `boundary` is not set.
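As a hint: if the test is well calibrated, p-values under a null effect should be roughly uniform between 0 and 1, so about 5% of them fall below alpha. Here is a rough sketch you could use to check (the object name `null_reps` is ours).
```
# identical means, so there is no true effect
null_reps <- replicate(1e4, sim_t_ind(100, 0.5, 1, 0.5, 1))
mean(null_reps < 0.05) # should be close to alpha
ggplot() +
  geom_histogram(
    aes(null_reps),
    binwidth = 0.05,
    boundary = 0, # try removing this to see how the first bar gets distorted
    fill = "white",
    color = "black"
  )
```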
### 8\.7\.3 Correlation
You can test if continuous variables are related to each other using the `cor()` function. Let’s use `rnorm_multi()` to make a quick table of correlated values.
```
dat <- rnorm_multi(
n = 100,
vars = 2,
r = -0.5,
varnames = c("x", "y")
)
cor(dat$x, dat$y)
```
```
## [1] -0.4960331
```
Set `n` to a large number like 1e6 so that the correlations are less affected by chance. Change the value of the **mean** for `x` or `y`. Does it change the correlation between `x` and `y`? What happens when you increase or decrease the **sd**? Can you work out any rules here?
`cor()` defaults to Pearson’s correlations. Set the `method` argument to use Kendall or Spearman correlations.
```
cor(dat$x, dat$y, method = "spearman")
```
```
## [1] -0.4724992
```
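One rule you can check directly: a correlation is unchanged when you add a constant to a variable or multiply it by a positive constant, which is why changing the means or SDs in `rnorm_multi()` leaves `r` alone. A quick sketch (the particular transformations are just for illustration):
```
cor(dat$x, dat$y) # original correlation
cor(dat$x * 10 + 5, dat$y) # rescaling and shifting x changes nothing
cor(dat$x, (dat$y - 3) / 2) # same for y
```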
#### 8\.7\.3\.1 Sampling function
Create a function that creates two variables with `n` observations and `r` correlation. Use the function `cor.test()` to give you p\-values for the correlation.
```
sim_cor_test <- function(n = 100, r = 0) {
dat <- rnorm_multi(
n = n,
vars = 2,
r = r,
varnames = c("x", "y")
)
ctest <- cor.test(dat$x, dat$y)
ctest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_cor_test(50, .5)
```
```
## [1] 0.001354836
```
#### 8\.7\.3\.2 Calculate power
Now replicate the simulation 10,000 (`1e4`) times.
```
my_reps <- replicate(1e4, sim_cor_test(50, 0.5))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.965
```
Compare to the value calculated by the pwr package.
```
pwr::pwr.r.test(n = 50, r = 0.5)
```
```
##
## approximate correlation power calculation (arctangh transformation)
##
## n = 50
## r = 0.5
## sig.level = 0.05
## power = 0.9669813
## alternative = two.sided
```
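You can also run `pwr.r.test()` the other way around: leave out `n`, supply the `power` you want, and it solves for the sample size. A quick sketch with illustrative values:
```
# how many observations for 80% power to detect r = 0.5?
pwr::pwr.r.test(r = 0.5, power = 0.8)
```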
8\.8 Example
------------
This example uses the [Growth Chart Data Tables](https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv) from the [US CDC](https://www.cdc.gov/growthcharts/zscore.htm). The data consist of height in centimeters for the z\-scores of –2, \-1\.5, \-1, \-0\.5, 0, 0\.5, 1, 1\.5, and 2 by sex (1\=male; 2\=female) and half\-month of age (from 24\.0 to 240\.5 months).
### 8\.8\.1 Load \& wrangle
We have to do a little data wrangling first. Have a look at the data after you import it and relabel `Sex` to `male` and `female` instead of `1` and `2`. Also convert `Agemos` (age in months) to years. Relabel the column `0` as `mean` and calculate a new column named `sd` as the difference between columns `1` and `0`.
```
orig_height_age <- read_csv("https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## Sex = col_character(),
## Agemos = col_character(),
## `-2` = col_double(),
## `-1.5` = col_double(),
## `-1` = col_double(),
## `-0.5` = col_double(),
## `0` = col_double(),
## `0.5` = col_double(),
## `1` = col_double(),
## `1.5` = col_double(),
## `2` = col_double()
## )
```
```
height_age <- orig_height_age %>%
filter(Sex %in% c(1,2)) %>%
mutate(
sex = recode(Sex, "1" = "male", "2" = "female"),
age = as.numeric(Agemos)/12,
sd = `1` - `0`
) %>%
select(sex, age, mean = `0`, sd)
```
### 8\.8\.2 Plot
Plot your new data frame to see how mean height changes with age for boys and girls.
```
ggplot(height_age, aes(age, mean, color = sex)) +
geom_smooth(aes(ymin = mean - sd,
ymax = mean + sd),
stat="identity")
```
### 8\.8\.3 Simulate a population
Simulate 50 random male heights and 50 random female heights for 20\-year\-olds using the `rnorm()` function and the means and SDs from the `height_age` table. Plot the data.
```
age_filter <- 20
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_height <- tibble(
male = rnorm(50, m$mean, m$sd),
female = rnorm(50, f$mean, f$sd)
) %>%
gather("sex", "height", male:female)
ggplot(sim_height) +
geom_density(aes(height, fill = sex), alpha = 0.5) +
xlim(125, 225)
```
Run the simulation above several times, noting how the density plot changes. Try changing the age you’re simulating.
### 8\.8\.4 Analyse simulated data
Use the `sim_t_ind(n, m1, sd1, m2, sd2)` function we created above to generate one simulation with a sample size of 50 in each group using the means and SDs of male and female 14\-year\-olds.
```
age_filter <- 14
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_t_ind(50, m$mean, m$sd, f$mean, f$sd)
```
```
## [1] 0.0005255744
```
### 8\.8\.5 Replicate simulation
Now replicate this 1e4 times using the `replicate()` function. This function will save the returned p\-values in a vector (`my_reps`). We can then check what proportion of those p\-values are less than our alpha value. This is the power of our test.
```
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.6403
```
### 8\.8\.6 One\-tailed prediction
This design has about 65% power to detect the sex difference in height (with a 2\-tailed test). Modify the `sim_t_ind` function for a 1\-tailed prediction.
You could just set `alternative` equal to “greater” in the function, but it might be better to add the `alt` argument to your function (giving it the same default value as `t.test`) and change the value of `alternative` in the function to `alt`.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2, alt = "two.sided") {
v1 <- rnorm(n, m1, sd1)
v2 <- rnorm(n, m2, sd2)
t_ind <- t.test(v1, v2, paired = FALSE, alternative = alt)
return(t_ind$p.value)
}
alpha <- 0.05
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(my_reps < alpha)
```
```
## [1] 0.752
```
### 8\.8\.7 Range of sample sizes
What if we want to find out what sample size will give us 80% power? We can try trial and error. We know the number should be slightly larger than 50\. But you can search more systematically by repeating your power calculation for a range of sample sizes.
This might seem like overkill for a t\-test, where you can easily look up sample size calculators online, but it is a valuable skill to learn for when your analyses become more complicated.
Start with a relatively low number of replications and/or more spread\-out samples to estimate where you should be looking more specifically. Then you can repeat with a narrower/denser range of sample sizes and more iterations.
```
# make another custom function to return power
pwr_func <- function(n, reps = 100, alpha = 0.05) {
ps <- replicate(reps, sim_t_ind(n, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(ps < alpha)
}
# make a table of the n values you want to check
power_table <- tibble(
n = seq(20, 100, by = 5)
) %>%
# run the power function for each n
mutate(power = map_dbl(n, pwr_func))
# plot the results
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
Now we can narrow down our search to values around 55 (plus or minus 5\) and increase the number of replications from 100 to 1e4\.
```
power_table <- tibble(
n = seq(50, 60)
) %>%
mutate(power = map_dbl(n, pwr_func, reps = 1e4))
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
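If you want a single number out of this table, a rough sketch is to take the smallest `n` whose estimated power clears the 80% line (the column name `smallest_n` is ours):
```
power_table %>%
  filter(power >= 0.8) %>%
  summarise(smallest_n = min(n))
```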
8\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [alpha](https://psyteachr.github.io/glossary/a#alpha) | (stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot |
| [beta](https://psyteachr.github.io/glossary/b#beta) | The false negative rate we accept for a statistical test. |
| [binomial distribution](https://psyteachr.github.io/glossary/b#binomial.distribution) | The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. |
| [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate.normal) | Two normally distributed vectors that have a specified correlation with each other. |
| [confidence interval](https://psyteachr.github.io/glossary/c#confidence.interval) | A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic. |
| [correlation](https://psyteachr.github.io/glossary/c#correlation) | The relationship two vectors have to each other. |
| [covariance matrix](https://psyteachr.github.io/glossary/c#covariance.matrix) | Parameters showing how a set of vectors vary and are correlated. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [effect size](https://psyteachr.github.io/glossary/e#effect.size) | The difference between the effect in your data and the null effect (usually a chance value) |
| [effect](https://psyteachr.github.io/glossary/e#effect) | Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value. |
| [false negative](https://psyteachr.github.io/glossary/f#false.negative) | When a test concludes there is no effect when there really is an effect |
| [false positive](https://psyteachr.github.io/glossary/f#false.positive) | When a test concludes there is an effect when there really is no effect |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [nhst](https://psyteachr.github.io/glossary/n#nhst) | Null Hypothesis Significance Testing |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [null effect](https://psyteachr.github.io/glossary/n#null.effect) | An outcome that does not show an otherwise expected effect. |
| [p value](https://psyteachr.github.io/glossary/p#p.value) | The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect) |
| [parameter](https://psyteachr.github.io/glossary/p#parameter) | A value that describes a distribution, such as the mean or SD |
| [poisson distribution](https://psyteachr.github.io/glossary/p#poisson.distribution) | A distribution that models independent events happening over a unit of time |
| [power](https://psyteachr.github.io/glossary/p#power) | The probability of rejecting the null hypothesis when it is false. |
| [probability](https://psyteachr.github.io/glossary/p#probability) | A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty |
| [sesoi](https://psyteachr.github.io/glossary/s#sesoi) | Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful |
| [significant](https://psyteachr.github.io/glossary/s#significant) | The conclusion when the p\-value is less than the critical alpha. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [true positive](https://psyteachr.github.io/glossary/t#true.positive) | When a test concludes there is an effect when there really is an effect |
| [type i error](https://psyteachr.github.io/glossary/t#type.i.error) | A false positive; When a test concludes there is an effect when there really is no effect |
| [type ii error](https://psyteachr.github.io/glossary/t#type.ii.error) | A false negative; When a test concludes there is no effect when there really is an effect |
| [uniform distribution](https://psyteachr.github.io/glossary/u#uniform.distribution) | A distribution where all numbers in the range have an equal probability of being sampled |
| [univariate](https://psyteachr.github.io/glossary/u#univariate) | Relating to a single variable. |
8\.10 Exercises
---------------
Download the [exercises](exercises/08_sim_exercise.Rmd). See the [answers](exercises/08_sim_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(8)
# run this to access the answers
dataskills::exercise(8, answers = TRUE)
```
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/sim.html |
Chapter 8 Probability \& Simulation
===================================
8\.1 Learning Objectives
------------------------
### 8\.1\.1 Basic
1. Generate and plot data randomly sampled from common distributions [(video)](https://youtu.be/iuecrT3q1kg)
* [uniform](sim.html#uniform)
* [binomial](sim.html#binomial)
* [normal](sim.html#normal)
* [poisson](sim.html#poisson)
2. Generate related variables from a [multivariate](sim.html#mvdist) distribution [(video)](https://youtu.be/B14HfWQ1kIc)
3. Define the following [statistical terms](sim.html#stat-terms):
* [p\-value](sim.html#p-value)
* [alpha](sim.html#alpha)
* [power](sim.html#power)
* smallest effect size of interest ([SESOI](#sesoi))
* [false positive](sim.html#false-pos) (type I error)
* [false negative](#false-neg) (type II error)
* confidence interval ([CI](#conf-inf))
4. Test sampled distributions against a null hypothesis [(video)](https://youtu.be/Am3G6rA2S1s)
* [exact binomial test](sim.html#exact-binom)
* [t\-test](sim.html#t-test) (1\-sample, independent samples, paired samples)
* [correlation](sim.html#correlation) (pearson, kendall and spearman)
5. [Calculate power](sim.html#calc-power-binom) using iteration and a sampling function
### 8\.1\.2 Intermediate
6. Calculate the minimum sample size for a specific power level and design
8\.2 Resources
--------------
* [Stub for this lesson](stubs/8_sim.Rmd)
* [Distribution Shiny App](http://shiny.psy.gla.ac.uk/debruine/simulate/) (or run `dataskills::app("simulate")`)
* [Simulation tutorials](https://debruine.github.io/tutorials/sim-data.html)
* [Chapter 21: Iteration](http://r4ds.had.co.nz/iteration.html) of *R for Data Science*
* [Improving your statistical inferences](https://www.coursera.org/learn/statistical-inferences/) on Coursera (week 1\)
* [Faux](https://debruine.github.io/faux/) package for data simulation
* [Simulation\-Based Power\-Analysis for Factorial ANOVA Designs](https://psyarxiv.com/baxsf) ([Daniel Lakens and Caldwell 2019](#ref-lakens_caldwell_2019))
* [Understanding mixed effects models through data simulation](https://psyarxiv.com/xp5cy/) ([DeBruine and Barr 2019](#ref-debruine_barr_2019))
8\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(plotly)
library(faux)
set.seed(8675309) # makes sure random numbers are reproducible
```
Simulating data is a very powerful way to test your understanding of statistical concepts. We are going to use [simulations](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") to learn the basics of [probability](https://psyteachr.github.io/glossary/p#probability "A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty").
8\.4 Univariate Distributions
-----------------------------
First, we need to understand some different ways data might be distributed and how to simulate data from these distributions. A [univariate](https://psyteachr.github.io/glossary/u#univariate "Relating to a single variable.") distribution is the distribution of a single variable.
### 8\.4\.1 Uniform Distribution
The [uniform distribution](https://psyteachr.github.io/glossary/u#uniform-distribution "A distribution where all numbers in the range have an equal probability of being sampled") is the simplest distribution. All numbers in the range have an equal probability of being sampled.
Take a minute to think of things in your own research that are uniformly distributed.
#### 8\.4\.1\.1 Continuous distribution
`runif(n, min=0, max=1)`
Use `runif()` to sample from a continuous uniform distribution.
```
u <- runif(100000, min = 0, max = 1)
# plot to visualise
ggplot() +
geom_histogram(aes(u), binwidth = 0.05, boundary = 0,
fill = "white", colour = "black")
```
#### 8\.4\.1\.2 Discrete
`sample(x, size, replace = FALSE, prob = NULL)`
Use `sample()` to sample from a [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") distribution.
You can use `sample()` to simulate events like rolling dice or choosing from a deck of cards. The code below simulates rolling a 6\-sided die 10000 times. We set `replace` to `TRUE` so that each event is independent. See what happens if you set `replace` to `FALSE`.
```
rolls <- sample(1:6, 10000, replace = TRUE)
# plot the results
ggplot() +
geom_histogram(aes(rolls), binwidth = 1,
fill = "white", color = "black")
```
Figure 8\.1: Distribution of dice rolls.
You can also use sample to sample from a list of named outcomes.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
sample(pet_types, 10, replace = TRUE)
```
```
## [1] "cat" "cat" "cat" "cat" "ferret" "dog" "bird" "cat"
## [9] "dog" "fish"
```
Ferrets are a much less common pet than cats and dogs, so our sample isn’t very realistic. You can set the probabilities of each item in the list with the `prob` argument.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
pet_prob <- c(0.3, 0.4, 0.1, 0.1, 0.1)
sample(pet_types, 10, replace = TRUE, prob = pet_prob)
```
```
## [1] "fish" "dog" "cat" "dog" "cat" "dog" "fish" "dog" "cat" "fish"
```
### 8\.4\.2 Binomial Distribution
The [binomial distribution](https://psyteachr.github.io/glossary/b#binomial-distribution "The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. ") is useful for modelling binary data, where each observation can have one of two outcomes, like success/failure, yes/no or head/tails.
`rbinom(n, size, prob)`
The `rbinom` function will generate a random binomial distribution.
* `n` \= number of observations
* `size` \= number of trials
* `prob` \= probability of success on each trial
Coin flips are a typical example of a binomial distribution, where we can assign heads to 1 and tails to 0\.
```
# 20 individual coin flips of a fair coin
rbinom(20, 1, 0.5)
```
```
## [1] 1 1 1 0 1 1 0 1 0 0 1 1 1 0 0 0 1 0 0 0
```
```
# 20 individual coin flips of a biased (0.75) coin
rbinom(20, 1, 0.75)
```
```
## [1] 1 1 1 0 1 0 1 1 1 0 1 1 1 0 0 1 1 1 1 1
```
You can generate the total number of heads in 1 set of 20 coin flips by setting `size` to 20 and `n` to 1\.
```
rbinom(1, 20, 0.75)
```
```
## [1] 13
```
You can generate more sets of 20 coin flips by increasing the `n`.
```
rbinom(10, 20, 0.5)
```
```
## [1] 10 14 11 7 11 13 6 10 9 9
```
You should always check your randomly generated data to check that it makes sense. For large samples, it’s easiest to do that graphically. A histogram is usually the best choice for plotting binomial data.
```
flips <- rbinom(1000, 20, 0.5)
ggplot() +
geom_histogram(
aes(flips),
binwidth = 1,
fill = "white",
color = "black"
)
```
Run the simulation above several times, noting how the histogram changes. Try changing the values of `n`, `size`, and `prob`.
### 8\.4\.3 Normal Distribution
`rnorm(n, mean, sd)`
We can simulate a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") of size `n` if we know the `mean` and standard deviation (`sd`). A density plot is usually the best way to visualise this type of data if your `n` is large.
```
dv <- rnorm(1e5, 10, 2)
# proportions of normally-distributed data
# within 1, 2, or 3 SD of the mean
sd1 <- .6827
sd2 <- .9545
sd3 <- .9973
ggplot() +
geom_density(aes(dv), fill = "white") +
geom_vline(xintercept = mean(dv), color = "red") +
geom_vline(xintercept = quantile(dv, .5 - sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 + sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 - sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 + sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 - sd3/2), color = "purple") +
geom_vline(xintercept = quantile(dv, .5 + sd3/2), color = "purple") +
scale_x_continuous(
limits = c(0,20),
breaks = seq(0,20)
)
```
Run the simulation above several times, noting how the density plot changes. What do the vertical lines represent? Try changing the values of `n`, `mean`, and `sd`.
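As a rough check on those proportions (this snippet is ours, not part of the original code), count how much of the simulated data falls within 1, 2, or 3 SDs of the mean:
```
mean(abs(dv - mean(dv)) < 1 * sd(dv)) # should be close to 0.6827
mean(abs(dv - mean(dv)) < 2 * sd(dv)) # close to 0.9545
mean(abs(dv - mean(dv)) < 3 * sd(dv)) # close to 0.9973
```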
### 8\.4\.4 Poisson Distribution
The [Poisson distribution](https://psyteachr.github.io/glossary/p#poisson-distribution "A distribution that models independent events happening over a unit of time") is useful for modelling events, like how many times something happens over a unit of time, as long as the events are independent (e.g., an event having happened in one time period doesn’t make it more or less likely to happen in the next).
`rpois(n, lambda)`
The `rpois` function will generate a random Poisson distribution.
* `n` \= number of observations
* `lambda` \= the mean number of events per observation
Let’s say we want to model how many texts you get each day for a whole year. You know that you get an average of 20 texts per day. So we set `n = 365` and `lambda = 20`. Lambda is a [parameter](https://psyteachr.github.io/glossary/p#parameter "A value that describes a distribution, such as the mean or SD") that describes the Poisson distribution, just like mean and standard deviation are parameters that describe the normal distribution.
```
texts <- rpois(n = 365, lambda = 20)
ggplot() +
geom_histogram(
aes(texts),
binwidth = 1,
fill = "white",
color = "black"
)
```
So we can see that over a year, you’re unlikely to get fewer than 5 texts in a day, or more than 35 (although it’s not impossible).
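If you want exact probabilities rather than eyeballing the histogram, the Poisson distribution functions in base R will give them; a quick sketch:
```
ppois(4, lambda = 20) # probability of fewer than 5 texts in a day
1 - ppois(35, lambda = 20) # probability of more than 35 texts in a day
```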
8\.5 Multivariate Distributions
-------------------------------
### 8\.5\.1 Bivariate Normal
A [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate-normal "Two normally distributed vectors that have a specified correlation with each other.") distribution is two normally distributed vectors that have a specified relationship, or [correlation](https://psyteachr.github.io/glossary/c#correlation "The relationship two vectors have to each other.") to each other.
What if we want to sample from a population with specific relationships between variables? We can sample from a bivariate normal distribution using `mvrnorm()` from the `MASS` package.
Don’t load MASS with the `library()` function because it will create a conflict with the `select()` function from dplyr and you will always need to preface it with `dplyr::`. Just use `MASS::mvrnorm()`.
You need to know how many observations you want to simulate (`n`) the means of the two variables (`mu`) and you need to calculate a [covariance matrix](https://psyteachr.github.io/glossary/c#covariance-matrix "Parameters showing how a set of vectors vary and are correlated.") (`sigma`) from the correlation between the variables (`rho`) and their standard deviations (`sd`).
```
n <- 1000 # number of random samples
# name the mu values to give the resulting columns names
mu <- c(x = 10, y = 20) # the means of the samples
sd <- c(5, 6) # the SDs of the samples
rho <- 0.5 # population correlation between the two variables
# correlation matrix
cor_mat <- matrix(c( 1, rho,
rho, 1), 2)
# create the covariance matrix
sigma <- (sd %*% t(sd)) * cor_mat
# sample from bivariate normal distribution
bvn <- MASS::mvrnorm(n, mu, sigma)
```
Plot your sampled variables to check everything worked as you expect. It’s easiest to convert the output of `mvrnorm()` into a tibble in order to use it in ggplot.
```
bvn %>%
as_tibble() %>%
ggplot(aes(x, y)) +
geom_point(alpha = 0.5) +
geom_smooth(method = "lm") +
geom_density2d()
```
```
## `geom_smooth()` using formula 'y ~ x'
```
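It is also worth checking the sample statistics against the parameters you asked for; a quick sketch (with n = 1000 they won’t match exactly):
```
colMeans(bvn) # compare to mu
apply(bvn, 2, sd) # compare to sd
cor(bvn) # compare to rho
```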
### 8\.5\.2 Multivariate Normal
You can generate more than 2 correlated variables, but it gets a little trickier to create the correlation matrix.
```
n <- 200 # number of random samples
mu <- c(x = 10, y = 20, z = 30) # the means of the samples
sd <- c(8, 9, 10) # the SDs of the samples
rho1_2 <- 0.5 # correlation between x and y
rho1_3 <- 0 # correlation between x and z
rho2_3 <- 0.7 # correlation between y and z
# correlation matrix
cor_mat <- matrix(c( 1, rho1_2, rho1_3,
rho1_2, 1, rho2_3,
rho1_3, rho2_3, 1), 3)
sigma <- (sd %*% t(sd)) * cor_mat
bvn3 <- MASS::mvrnorm(n, mu, sigma)
cor(bvn3) # check correlation matrix
```
```
## x y z
## x 1.0000000 0.5896674 0.1513108
## y 0.5896674 1.0000000 0.7468737
## z 0.1513108 0.7468737 1.0000000
```
You can use the `plotly` library to make a 3D graph.
```
#set up the marker style
marker_style = list(
color = "#ff0000",
line = list(
color = "#444",
width = 1
),
opacity = 0.5,
size = 5
)
# convert bvn3 to a tibble, plot and add markers
bvn3 %>%
as_tibble() %>%
plot_ly(x = ~x, y = ~y, z = ~z, marker = marker_style) %>%
add_markers()
```
### 8\.5\.3 Faux
Alternatively, you can use the package [faux](https://debruine.github.io/faux/) to generate any number of correlated variables. It also has a function for checking the parameters of your new simulated data (`check_sim_stats()`).
```
bvn3 <- rnorm_multi(
n = n,
vars = 3,
mu = mu,
sd = sd,
r = c(rho1_2, rho1_3, rho2_3),
varnames = c("x", "y", "z")
)
check_sim_stats(bvn3)
```
| n | var | x | y | z | mean | sd |
| --- | --- | --- | --- | --- | --- | --- |
| 200 | x | 1\.00 | 0\.54 | 0\.10 | 10\.35 | 7\.66 |
| 200 | y | 0\.54 | 1\.00 | 0\.67 | 20\.01 | 8\.77 |
| 200 | z | 0\.10 | 0\.67 | 1\.00 | 30\.37 | 9\.59 |
You can also use faux to simulate data for factorial designs. Set up the between\-subject and within\-subject factors as lists with the levels as (named) vectors. Means and standard deviations can be included as vectors or data frames. The function calculates sigma for you, structures your dataset, and outputs a plot of the design.
```
b <- list(pet = c(cat = "Cat Owners",
dog = "Dog Owners"))
w <- list(time = c("morning",
"noon",
"night"))
mu <- data.frame(
cat = c(10, 12, 14),
dog = c(10, 15, 20),
row.names = w$time
)
sd <- c(3, 3, 3, 5, 5, 5)
pet_data <- sim_design(
within = w,
between = b,
n = 100,
mu = mu,
sd = sd,
r = .5)
```
You can use the `check_sim_stats()` function, but you need to set the argument `between` to a vector of all the between\-subject factor columns.
```
check_sim_stats(pet_data, between = "pet")
```
| pet | n | var | morning | noon | night | mean | sd |
| --- | --- | --- | --- | --- | --- | --- | --- |
| cat | 100 | morning | 1\.00 | 0\.57 | 0\.51 | 10\.62 | 3\.48 |
| cat | 100 | noon | 0\.57 | 1\.00 | 0\.59 | 12\.44 | 3\.01 |
| cat | 100 | night | 0\.51 | 0\.59 | 1\.00 | 14\.61 | 3\.14 |
| dog | 100 | morning | 1\.00 | 0\.55 | 0\.50 | 9\.44 | 4\.92 |
| dog | 100 | noon | 0\.55 | 1\.00 | 0\.48 | 14\.18 | 5\.90 |
| dog | 100 | night | 0\.50 | 0\.48 | 1\.00 | 19\.42 | 5\.36 |
See the [faux website](https://debruine.github.io/faux/) for more detailed tutorials.
8\.6 Statistical terms
----------------------
Let’s review some important statistical terms before we review tests of distributions.
### 8\.6\.1 Effect
The [effect](https://psyteachr.github.io/glossary/e#effect "Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value.") is some measure of your data. This will depend on the type of data you have and the type of statistical test you are using. For example, if you flipped a coin 100 times and it landed heads 66 times, the effect would be 66/100\. You can then use the exact binomial test to compare this effect to the [null effect](https://psyteachr.github.io/glossary/n#null-effect "An outcome that does not show an otherwise expected effect.") you would expect from a fair coin (50/100\) or to any other effect you choose. The [effect size](https://psyteachr.github.io/glossary/e#effect-size "The difference between the effect in your data and the null effect (usually a chance value)") refers to the difference between the effect in your data and the null effect (usually a chance value).
### 8\.6\.2 P\-value
The [p\-value](https://psyteachr.github.io/glossary/p#p-value "The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect)") of a test is the probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect). So if you used a binomial test to test against a chance probability of 1/6 (e.g., the probability of rolling 1 with a 6\-sided die), then a p\-value of 0\.17 means that you could expect to see effects at least as extreme as your data 17% of the time just by chance alone.
### 8\.6\.3 Alpha
If you are using null hypothesis significance testing ([NHST](https://psyteachr.github.io/glossary/n#nhst "Null Hypothesis Signficance Testing")), then you need to decide on a cutoff value ([alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot")) for making a decision to reject the null hypothesis. We call p\-values below the alpha cutoff [significant](https://psyteachr.github.io/glossary/s#significant "The conclusion when the p-value is less than the critical alpha. "). In psychology, alpha is traditionally set at 0\.05, but there are good arguments for [setting a different criterion in some circumstances](http://daniellakens.blogspot.com/2019/05/justifying-your-alpha-by-minimizing-or.html).
### 8\.6\.4 False Positive/Negative
The probability that a test concludes there is an effect when there is really no effect (e.g., concludes a fair coin is biased) is called the [false positive](https://psyteachr.github.io/glossary/f#false-positive "When a test concludes there is an effect when there really is no effect") rate (or [Type I Error](https://psyteachr.github.io/glossary/t#type-i-error "A false positive; When a test concludes there is an effect when there is really is no effect") Rate). The [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") is the false positive rate we accept for a test. The probability that a test concludes there is no effect when there really is one (e.g., concludes a biased coin is fair) is called the [false negative](https://psyteachr.github.io/glossary/f#false-negative "When a test concludes there is no effect when there really is an effect") rate (or [Type II Error](https://psyteachr.github.io/glossary/t#type-ii-error "A false negative; When a test concludes there is no effect when there is really is an effect") Rate). The [beta](https://psyteachr.github.io/glossary/b#beta "The false negative rate we accept for a statistical test.") is the false negative rate we accept for a test.
The false positive rate is not the overall probability of getting a false positive, but the probability of a false positive *under the null hypothesis*. Similarly, the false negative rate is the probability of a false negative *under the alternative hypothesis*. Unless we know the probability that we are testing a null effect, we can’t say anything about the overall probability of false positives or negatives. If 100% of the hypotheses we test are false, then all significant effects are false positives, but if all of the hypotheses we test are true, then all of the positives are true positives and the overall false positive rate is 0\.
### 8\.6\.5 Power and SESOI
[Power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") is equal to 1 minus beta (i.e., the [true positive](https://psyteachr.github.io/glossary/t#true-positive "When a test concludes there is an effect when there is really is an effect") rate), and depends on the effect size, how many samples we take (n), and what we set alpha to. For any test, if you specify all but one of these values, you can calculate the last. The effect size you use in power calculations should be the smallest effect size of interest ([SESOI](https://psyteachr.github.io/glossary/s#sesoi "Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful")). See ([Daniël Lakens, Scheel, and Isager 2018](#ref-TOSTtutorial))([https://doi.org/10\.1177/2515245918770963](https://doi.org/10.1177/2515245918770963)) for a tutorial on methods for choosing an SESOI.
Let’s say you want to be able to detect at least a 15% difference from chance (50%) in a coin’s fairness, and you want your test to have a 5% chance of false positives and a 10% chance of false negatives. What are the following values?
* alpha \=
* beta \=
* false positive rate \=
* false negative rate \=
* power \=
* SESOI \=
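To see how "specify all but one and calculate the last" works in practice, here is a hedged sketch for the scenario above, using the one-sample proportion test from the pwr package (the same package used for correlations later in this chapter); everything except `n` is supplied, so it solves for the sample size.
```
# effect size for detecting 65% heads against the 50% null
h <- pwr::ES.h(0.65, 0.5)
# alpha = .05 (accepted false positive rate), power = .90 (so beta = .10)
pwr::pwr.p.test(h = h, sig.level = 0.05, power = 0.9)
```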
### 8\.6\.6 Confidence Intervals
The [confidence interval](https://psyteachr.github.io/glossary/c#confidence-interval "A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic.") is a range around some value (such as a mean) that has some probability of containing the parameter, if you repeated the process many times. Traditionally in psychology, we use 95% confidence intervals, but you can calculate CIs for any percentage.
A 95% CI does *not* mean that there is a 95% probability that the true mean lies within this range, but that, if you repeated the study many times and calculated the CI this same way every time, you’d expect the true mean to be inside the CI in 95% of the studies. This seems like a subtle distinction, but can lead to some misunderstandings. See ([Morey et al. 2016](#ref-Morey2016))([https://link.springer.com/article/10\.3758/s13423\-015\-0947\-8](https://link.springer.com/article/10.3758/s13423-015-0947-8)) for more detailed discussion.
8\.7 Tests
----------
### 8\.7\.1 Exact binomial test
`binom.test(x, n, p)`
You can test a binomial distribution against a specific probability using the exact binomial test.
* `x` \= the number of successes
* `n` \= the number of trials
* `p` \= hypothesised probability of success
Here we can test a series of 10 coin flips from a fair coin and a biased coin against the hypothesised probability of 0\.5 (even odds).
```
n <- 10
fair_coin <- rbinom(1, n, 0.5)
biased_coin <- rbinom(1, n, 0.6)
binom.test(fair_coin, n, p = 0.5)
binom.test(biased_coin, n, p = 0.5)
```
```
##
## Exact binomial test
##
## data: fair_coin and n
## number of successes = 6, number of trials = 10, p-value = 0.7539
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.2623781 0.8784477
## sample estimates:
## probability of success
## 0.6
##
##
## Exact binomial test
##
## data: biased_coin and n
## number of successes = 8, number of trials = 10, p-value = 0.1094
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.4439045 0.9747893
## sample estimates:
## probability of success
## 0.8
```
Run the code above several times, noting the p\-values for the fair and biased coins. Alternatively, you can [simulate coin flips](http://shiny.psy.gla.ac.uk/debruine/coinsim/) online and build up a graph of results and p\-values.
* How does the p\-value vary for the fair and biased coins?
* What happens to the confidence intervals if you increase n from 10 to 100?
* What criterion would you use to tell if the observed data indicate the coin is fair or biased?
* How often do you conclude the fair coin is biased (false positives)?
* How often do you conclude the biased coin is fair (false negatives)?
#### 8\.7\.1\.1 Sampling function
To estimate these rates, we need to repeat the sampling above many times. A [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") is ideal for repeating the exact same procedure over and over. Set the arguments of the function to variables that you might want to change. Here, we will want to estimate power for:
* different sample sizes (`n`)
* different effects (`bias`)
* different hypothesised probabilities (`p`, defaults to 0\.5\)
```
sim_binom_test <- function(n, bias, p = 0.5) {
# simulate 1 coin flip n times with the specified bias
coin <- rbinom(1, n, bias)
# run a binomial test on the simulated data for the specified p
btest <- binom.test(coin, n, p)
# return the p-value of this test
btest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_binom_test(100, 0.6)
```
```
## [1] 0.1332106
```
#### 8\.7\.1\.2 Calculate power
Then you can use the `replicate()` function to run it many times and save all the output values. You can calculate the [power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") of your analysis by checking the proportion of your simulated analyses that have a p\-value less than your [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") (the probability of rejecting the null hypothesis when the null hypothesis is true).
```
my_reps <- replicate(1e4, sim_binom_test(100, 0.6))
alpha <- 0.05 # this does not always have to be 0.05
mean(my_reps < alpha)
```
```
## [1] 0.4561
```
`1e4` is just scientific notation for a 1 followed by 4 zeros (`10000`). When you’re running simulations, you usually want to run a lot of them. It’s a pain to keep track of whether you’ve typed 5 or 6 zeros (100000 vs 1000000\) and this will change your running time by an order of magnitude.
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
### 8\.7\.2 T\-test
`t.test(x, y, alternative, mu, paired)`
Use a t\-test to compare the mean of one distribution to a null hypothesis (one\-sample t\-test), compare the means of two samples (independent\-samples t\-test), or compare pairs of values (paired\-samples t\-test).
You can run a one\-sample t\-test comparing the mean of your data to `mu`. Here is a simulated distribution with a mean of 0\.5 and an SD of 1, creating an effect size of 0\.5 SD when tested against a `mu` of 0\. Run the simulation a few times to see how often the t\-test returns a significant p\-value (or run it in the [shiny app](http://shiny.psy.gla.ac.uk/debruine/normsim/)).
```
sim_norm <- rnorm(100, 0.5, 1)
t.test(sim_norm, mu = 0)
```
```
##
## One Sample t-test
##
## data: sim_norm
## t = 6.2874, df = 99, p-value = 8.758e-09
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.4049912 0.7784761
## sample estimates:
## mean of x
## 0.5917337
```
Run an independent\-samples t\-test by comparing two lists of values.
```
a <- rnorm(100, 0.5, 1)
b <- rnorm(100, 0.7, 1)
t_ind <- t.test(a, b, paired = FALSE)
t_ind
```
```
##
## Welch Two Sample t-test
##
## data: a and b
## t = -1.8061, df = 197.94, p-value = 0.07243
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.54825320 0.02408469
## sample estimates:
## mean of x mean of y
## 0.4585985 0.7206828
```
The `paired` argument defaults to `FALSE`, but it’s good practice to always explicitly set it so you are never confused about what type of test you are performing.
#### 8\.7\.2\.1 Sampling function
We can use the `names()` function to find out the names of all the t.test parameters and use this to just get one type of data, like the test statistic (e.g., t\-value).
```
names(t_ind)
t_ind$statistic
```
```
## [1] "statistic" "parameter" "p.value" "conf.int" "estimate"
## [6] "null.value" "stderr" "alternative" "method" "data.name"
## t
## -1.806051
```
If you want to run the simulation many times and record information each time, first you need to turn your simulation into a function.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2) {
# simulate v1
v1 <- rnorm(n, m1, sd1)
#simulate v2
v2 <- rnorm(n, m2, sd2)
# compare using an independent samples t-test
t_ind <- t.test(v1, v2, paired = FALSE)
# return the p-value
return(t_ind$p.value)
}
```
Run it a few times to check that it gives you sensible values.
```
sim_t_ind(100, 0.7, 1, 0.5, 1)
```
```
## [1] 0.362521
```
#### 8\.7\.2\.2 Calculate power
Now replicate the simulation 10,000 times.
```
my_reps <- replicate(1e4, sim_t_ind(100, 0.7, 1, 0.5, 1))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.2925
```
Run the code above several times. How much does the power value fluctuate? How many replications do you need to run to get a reliable estimate of power?
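One way to get a feel for this is to re-estimate the power a few times at different numbers of replications. This is a minimal sketch (it assumes the `sim_t_ind()` function defined above); the exact numbers you get will differ:
```
# re-estimate power several times to see how much the estimate fluctuates
power_est <- function(reps) {
  mean(replicate(reps, sim_t_ind(100, 0.7, 1, 0.5, 1)) < 0.05)
}
replicate(5, power_est(1e3)) # noisier estimates
replicate(5, power_est(1e4)) # much tighter spread
```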
Compare your power estimate from simulation to a power calculation using `power.t.test()`. Here, `delta` is the difference between `m1` and `m2` above.
```
power.t.test(n = 100,
delta = 0.2,
sd = 1,
sig.level = alpha,
type = "two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
What do you think the distribution of p\-values is when there is no effect (i.e., the means are identical)? Check this yourself.
Make sure the `boundary` argument is set to `0` for p\-value histograms. See what happens with a null effect if `boundary` is not set.
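One way to check this yourself (a minimal sketch using `sim_t_ind()` with identical means): under a null effect the p\-values should be roughly uniform between 0 and 1.
```
# simulate the null effect: both groups have the same mean
null_reps <- replicate(1e4, sim_t_ind(100, 0.5, 1, 0.5, 1))

ggplot() +
  geom_histogram(
    aes(null_reps),
    binwidth = 0.05,
    boundary = 0,
    fill = "white",
    color = "black"
  )
```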
### 8\.7\.3 Correlation
You can quantify how strongly continuous variables are related to each other using the `cor()` function. Let’s use `rnorm_multi()` to make a quick table of correlated values.
```
dat <- rnorm_multi(
n = 100,
vars = 2,
r = -0.5,
varnames = c("x", "y")
)
cor(dat$x, dat$y)
```
```
## [1] -0.4960331
```
Set `n` to a large number like 1e6 so that the correlations are less affected by chance. Change the value of the **mean** for `x` or `y`. Does it change the correlation between `x` and `y`? What happens when you increase or decrease the **sd**? Can you work out any rules here?
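For example, here is a minimal sketch with a large `n` and different means and SDs for the two variables (the specific values are arbitrary):
```
# with a large n, changing the means or SDs should leave r essentially unchanged
dat_big <- rnorm_multi(
  n = 1e6,
  vars = 2,
  mu = c(100, 0), # different means
  sd = c(10, 1),  # different SDs
  r = -0.5,
  varnames = c("x", "y")
)
cor(dat_big$x, dat_big$y) # still close to -0.5
```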
`cor()` defaults to Pearson’s correlations. Set the `method` argument to use Kendall or Spearman correlations.
```
cor(dat$x, dat$y, method = "spearman")
```
```
## [1] -0.4724992
```
#### 8\.7\.3\.1 Sampling function
Create a function that creates two variables with `n` observations and `r` correlation. Use the function `cor.test()` to give you p\-values for the correlation.
```
sim_cor_test <- function(n = 100, r = 0) {
dat <- rnorm_multi(
n = n,
vars = 2,
r = r,
varnames = c("x", "y")
)
ctest <- cor.test(dat$x, dat$y)
ctest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_cor_test(50, .5)
```
```
## [1] 0.001354836
```
#### 8\.7\.3\.2 Calculate power
Now replicate the simulation 10,000 times.
```
my_reps <- replicate(1e4, sim_cor_test(50, 0.5))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.965
```
Compare this to the value calculated by the pwr package.
```
pwr::pwr.r.test(n = 50, r = 0.5)
```
```
##
## approximate correlation power calculation (arctangh transformation)
##
## n = 50
## r = 0.5
## sig.level = 0.05
## power = 0.9669813
## alternative = two.sided
```
8\.8 Example
------------
This example uses the [Growth Chart Data Tables](https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv) from the [US CDC](https://www.cdc.gov/growthcharts/zscore.htm). The data consist of height in centimeters for the z\-scores of \-2, \-1\.5, \-1, \-0\.5, 0, 0\.5, 1, 1\.5, and 2 by sex (1\=male; 2\=female) and half\-month of age (from 24\.0 to 240\.5 months).
### 8\.8\.1 Load \& wrangle
We have to do a little data wrangling first. Have a look at the data after you import it and relabel `Sex` to `male` and `female` instead of `1` and `2`. Also convert `Agemos` (age in months) to years. Relabel the column `0` as `mean` and calculate a new column named `sd` as the difference between columns `1` and `0`.
```
orig_height_age <- read_csv("https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## Sex = col_character(),
## Agemos = col_character(),
## `-2` = col_double(),
## `-1.5` = col_double(),
## `-1` = col_double(),
## `-0.5` = col_double(),
## `0` = col_double(),
## `0.5` = col_double(),
## `1` = col_double(),
## `1.5` = col_double(),
## `2` = col_double()
## )
```
```
height_age <- orig_height_age %>%
filter(Sex %in% c(1,2)) %>%
mutate(
sex = recode(Sex, "1" = "male", "2" = "female"),
age = as.numeric(Agemos)/12,
sd = `1` - `0`
) %>%
select(sex, age, mean = `0`, sd)
```
### 8\.8\.2 Plot
Plot your new data frame to see how mean height changes with age for boys and girls.
```
ggplot(height_age, aes(age, mean, color = sex)) +
geom_smooth(aes(ymin = mean - sd,
ymax = mean + sd),
stat="identity")
```
### 8\.8\.3 Simulate a population
Simulate 50 random male heights and 50 random female heights for 20\-year\-olds using the `rnorm()` function and the means and SDs from the `height_age` table. Plot the data.
```
age_filter <- 20
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_height <- tibble(
male = rnorm(50, m$mean, m$sd),
female = rnorm(50, f$mean, f$sd)
) %>%
gather("sex", "height", male:female)
ggplot(sim_height) +
geom_density(aes(height, fill = sex), alpha = 0.5) +
xlim(125, 225)
```
Run the simulation above several times, noting how the density plot changes. Try changing the age you’re simulating.
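A quick sketch of one sanity check you might add: compare the sample statistics to the population parameters you simulated from.
```
# the sample means and SDs should be close to the values from height_age
sim_height %>%
  group_by(sex) %>%
  summarise(mean = mean(height), sd = sd(height))
```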
### 8\.8\.4 Analyse simulated data
Use the `sim_t_ind(n, m1, sd1, m2, sd2)` function we created above to generate one simulation with a sample size of 50 in each group using the means and SDs of male and female 14\-year\-olds.
```
age_filter <- 14
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_t_ind(50, m$mean, m$sd, f$mean, f$sd)
```
```
## [1] 0.0005255744
```
### 8\.8\.5 Replicate simulation
Now replicate this 1e4 times using the `replicate()` function. This function will save the returned p\-values in a vector (`my_reps`). We can then check what proportion of those p\-values are less than our alpha value. This is the power of our test.
```
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.6403
```
### 8\.8\.6 One\-tailed prediction
This design has about 65% power to detect the sex difference in height (with a 2\-tailed test). Modify the `sim_t_ind` function for a 1\-tailed prediction.
You could just set `alternative` equal to “greater” in the function, but it might be better to add the `alt` argument to your function (giving it the same default value as `t.test`) and change the value of `alternative` in the function to `alt`.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2, alt = "two.sided") {
v1 <- rnorm(n, m1, sd1)
v2 <- rnorm(n, m2, sd2)
t_ind <- t.test(v1, v2, paired = FALSE, alternative = alt)
return(t_ind$p.value)
}
alpha <- 0.05
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(my_reps < alpha)
```
```
## [1] 0.752
```
### 8\.8\.7 Range of sample sizes
What if we want to find out what sample size will give us 80% power? We can try trial and error. We know the number should be slightly larger than 50\. But you can search more systematically by repeating your power calculation for a range of sample sizes.
This might seem like overkill for a t\-test, where you can easily look up sample size calculators online, but it is a valuable skill to learn for when your analyses become more complicated.
Start with a relatively low number of replications and/or more spread\-out samples to estimate where you should be looking more specifically. Then you can repeat with a narrower/denser range of sample sizes and more iterations.
```
# make another custom function to return power
pwr_func <- function(n, reps = 100, alpha = 0.05) {
ps <- replicate(reps, sim_t_ind(n, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(ps < alpha)
}
# make a table of the n values you want to check
power_table <- tibble(
n = seq(20, 100, by = 5)
) %>%
# run the power function for each n
mutate(power = map_dbl(n, pwr_func))
# plot the results
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
Now we can narrow down our search to values around 55 (plus or minus 5\) and increase the number of replications from 100 to 1e4\.
```
power_table <- tibble(
n = seq(50, 60)
) %>%
mutate(power = map_dbl(n, pwr_func, reps = 1e4))
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
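Once the table is dense enough, you can read the answer straight off it. A minimal sketch of pulling out the smallest `n` that reaches 80% power (the exact value will vary from run to run):
```
# smallest n in the table with estimated power of at least 0.8
power_table %>%
  filter(power >= 0.8) %>%
  arrange(n) %>%
  head(1)
```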
8\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [alpha](https://psyteachr.github.io/glossary/a#alpha) | (stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot |
| [beta](https://psyteachr.github.io/glossary/b#beta) | The false negative rate we accept for a statistical test. |
| [binomial distribution](https://psyteachr.github.io/glossary/b#binomial.distribution) | The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. |
| [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate.normal) | Two normally distributed vectors that have a specified correlation with each other. |
| [confidence interval](https://psyteachr.github.io/glossary/c#confidence.interval) | A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic. |
| [correlation](https://psyteachr.github.io/glossary/c#correlation) | The relationship two vectors have to each other. |
| [covariance matrix](https://psyteachr.github.io/glossary/c#covariance.matrix) | Parameters showing how a set of vectors vary and are correlated. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [effect size](https://psyteachr.github.io/glossary/e#effect.size) | The difference between the effect in your data and the null effect (usually a chance value) |
| [effect](https://psyteachr.github.io/glossary/e#effect) | Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value. |
| [false negative](https://psyteachr.github.io/glossary/f#false.negative) | When a test concludes there is no effect when there really is an effect |
| [false positive](https://psyteachr.github.io/glossary/f#false.positive) | When a test concludes there is an effect when there really is no effect |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [nhst](https://psyteachr.github.io/glossary/n#nhst) | Null Hypothesis Significance Testing |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [null effect](https://psyteachr.github.io/glossary/n#null.effect) | An outcome that does not show an otherwise expected effect. |
| [p value](https://psyteachr.github.io/glossary/p#p.value) | The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect) |
| [parameter](https://psyteachr.github.io/glossary/p#parameter) | A value that describes a distribution, such as the mean or SD |
| [poisson distribution](https://psyteachr.github.io/glossary/p#poisson.distribution) | A distribution that models independent events happening over a unit of time |
| [power](https://psyteachr.github.io/glossary/p#power) | The probability of rejecting the null hypothesis when it is false. |
| [probability](https://psyteachr.github.io/glossary/p#probability) | A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty |
| [sesoi](https://psyteachr.github.io/glossary/s#sesoi) | Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful |
| [significant](https://psyteachr.github.io/glossary/s#significant) | The conclusion when the p\-value is less than the critical alpha. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [true positive](https://psyteachr.github.io/glossary/t#true.positive) | When a test concludes there is an effect when there really is an effect |
| [type i error](https://psyteachr.github.io/glossary/t#type.i.error) | A false positive; when a test concludes there is an effect when there really is no effect |
| [type ii error](https://psyteachr.github.io/glossary/t#type.ii.error) | A false negative; when a test concludes there is no effect when there really is an effect |
| [uniform distribution](https://psyteachr.github.io/glossary/u#uniform.distribution) | A distribution where all numbers in the range have an equal probability of being sampled |
| [univariate](https://psyteachr.github.io/glossary/u#univariate) | Relating to a single variable. |
8\.10 Exercises
---------------
Download the [exercises](exercises/08_sim_exercise.Rmd). See the [answers](exercises/08_sim_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(8)
# run this to access the answers
dataskills::exercise(8, answers = TRUE)
```
8\.1 Learning Objectives
------------------------
### 8\.1\.1 Basic
1. Generate and plot data randomly sampled from common distributions [(video)](https://youtu.be/iuecrT3q1kg)
* [uniform](sim.html#uniform)
* [binomial](sim.html#binomial)
* [normal](sim.html#normal)
* [poisson](sim.html#poisson)
2. Generate related variables from a [multivariate](sim.html#mvdist) distribution [(video)](https://youtu.be/B14HfWQ1kIc)
3. Define the following [statistical terms](sim.html#stat-terms):
* [p\-value](sim.html#p-value)
* [alpha](sim.html#alpha)
* [power](sim.html#power)
* smallest effect size of interest ([SESOI](#sesoi))
* [false positive](sim.html#false-pos) (type I error)
* [false negative](#false-neg) (type II error)
* confidence interval ([CI](#conf-inf))
4. Test sampled distributions against a null hypothesis [(video)](https://youtu.be/Am3G6rA2S1s)
* [exact binomial test](sim.html#exact-binom)
* [t\-test](sim.html#t-test) (1\-sample, independent samples, paired samples)
* [correlation](sim.html#correlation) (pearson, kendall and spearman)
5. [Calculate power](sim.html#calc-power-binom) using iteration and a sampling function
### 8\.1\.2 Intermediate
6. Calculate the minimum sample size for a specific power level and design
8\.2 Resources
--------------
* [Stub for this lesson](stubs/8_sim.Rmd)
* [Distribution Shiny App](http://shiny.psy.gla.ac.uk/debruine/simulate/) (or run `dataskills::app("simulate")`)
* [Simulation tutorials](https://debruine.github.io/tutorials/sim-data.html)
* [Chapter 21: Iteration](http://r4ds.had.co.nz/iteration.html) of *R for Data Science*
* [Improving your statistical inferences](https://www.coursera.org/learn/statistical-inferences/) on Coursera (week 1\)
* [Faux](https://debruine.github.io/faux/) package for data simulation
* [Simulation\-Based Power\-Analysis for Factorial ANOVA Designs](https://psyarxiv.com/baxsf) ([Daniel Lakens and Caldwell 2019](#ref-lakens_caldwell_2019))
* [Understanding mixed effects models through data simulation](https://psyarxiv.com/xp5cy/) ([DeBruine and Barr 2019](#ref-debruine_barr_2019))
8\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(plotly)
library(faux)
set.seed(8675309) # makes sure random numbers are reproducible
```
Simulating data is a very powerful way to test your understanding of statistical concepts. We are going to use [simulations](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") to learn the basics of [probability](https://psyteachr.github.io/glossary/p#probability "A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty").
8\.4 Univariate Distributions
-----------------------------
First, we need to understand some different ways data might be distributed and how to simulate data from these distributions. A [univariate](https://psyteachr.github.io/glossary/u#univariate "Relating to a single variable.") distribution is the distribution of a single variable.
### 8\.4\.1 Uniform Distribution
The [uniform distribution](https://psyteachr.github.io/glossary/u#uniform-distribution "A distribution where all numbers in the range have an equal probability of being sampled") is the simplest distribution. All numbers in the range have an equal probability of being sampled.
Take a minute to think of things in your own research that are uniformly distributed.
#### 8\.4\.1\.1 Continuous distribution
`runif(n, min=0, max=1)`
Use `runif()` to sample from a continuous uniform distribution.
```
u <- runif(100000, min = 0, max = 1)
# plot to visualise
ggplot() +
geom_histogram(aes(u), binwidth = 0.05, boundary = 0,
fill = "white", colour = "black")
```
#### 8\.4\.1\.2 Discrete
`sample(x, size, replace = FALSE, prob = NULL)`
Use `sample()` to sample from a [discrete](https://psyteachr.github.io/glossary/d#discrete "Data that can only take certain values, such as integers.") distribution.
You can use `sample()` to simulate events like rolling dice or choosing from a deck of cards. The code below simulates rolling a 6\-sided die 10000 times. We set `replace` to `TRUE` so that each event is independent. See what happens if you set `replace` to `FALSE`.
```
rolls <- sample(1:6, 10000, replace = TRUE)
# plot the results
ggplot() +
geom_histogram(aes(rolls), binwidth = 1,
fill = "white", color = "black")
```
Figure 8\.1: Distribution of dice rolls.
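For reference, here is a minimal sketch of what `replace = FALSE` does in this case: you can’t draw more values than exist in the population, so R raises an error (wrapped in `try()` so a script would keep running).
```
# sampling 10000 values from only 6 without replacement is impossible
try(sample(1:6, 10000, replace = FALSE))
```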
You can also use `sample()` to sample from a list of named outcomes.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
sample(pet_types, 10, replace = TRUE)
```
```
## [1] "cat" "cat" "cat" "cat" "ferret" "dog" "bird" "cat"
## [9] "dog" "fish"
```
Ferrets are a much less common pet than cats and dogs, so our sample isn’t very realistic. You can set the probabilities of each item in the list with the `prob` argument.
```
pet_types <- c("cat", "dog", "ferret", "bird", "fish")
pet_prob <- c(0.3, 0.4, 0.1, 0.1, 0.1)
sample(pet_types, 10, replace = TRUE, prob = pet_prob)
```
```
## [1] "fish" "dog" "cat" "dog" "cat" "dog" "fish" "dog" "cat" "fish"
```
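A quick sketch of checking that the `prob` argument is doing what you expect: with a large sample, the observed proportions should be close to the probabilities you supplied.
```
# observed proportions should approximate pet_prob
big_sample <- sample(pet_types, 1e5, replace = TRUE, prob = pet_prob)
prop.table(table(big_sample))
```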
### 8\.4\.2 Binomial Distribution
The [binomial distribution](https://psyteachr.github.io/glossary/b#binomial-distribution "The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. ") is useful for modelling binary data, where each observation can have one of two outcomes, like success/failure, yes/no or head/tails.
`rbinom(n, size, prob)`
The `rbinom` function will generate a random binomial distribution.
* `n` \= number of observations
* `size` \= number of trials
* `prob` \= probability of success on each trial
Coin flips are a typical example of a binomial distribution, where we can assign heads to 1 and tails to 0\.
```
# 20 individual coin flips of a fair coin
rbinom(20, 1, 0.5)
```
```
## [1] 1 1 1 0 1 1 0 1 0 0 1 1 1 0 0 0 1 0 0 0
```
```
# 20 individual coin flips of a biased (0.75) coin
rbinom(20, 1, 0.75)
```
```
## [1] 1 1 1 0 1 0 1 1 1 0 1 1 1 0 0 1 1 1 1 1
```
You can generate the total number of heads in 1 set of 20 coin flips by setting `size` to 20 and `n` to 1\.
```
rbinom(1, 20, 0.75)
```
```
## [1] 13
```
You can generate more sets of 20 coin flips by increasing the `n`.
```
rbinom(10, 20, 0.5)
```
```
## [1] 10 14 11 7 11 13 6 10 9 9
```
You should always inspect your randomly generated data to check that it makes sense. For large samples, it’s easiest to do that graphically. A histogram is usually the best choice for plotting binomial data.
```
flips <- rbinom(1000, 20, 0.5)
ggplot() +
geom_histogram(
aes(flips),
binwidth = 1,
fill = "white",
color = "black"
)
```
Run the simulation above several times, noting how the histogram changes. Try changing the values of `n`, `size`, and `prob`.
### 8\.4\.3 Normal Distribution
`rnorm(n, mean, sd)`
We can simulate a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") of size `n` if we know the `mean` and standard deviation (`sd`). A density plot is usually the best way to visualise this type of data if your `n` is large.
```
dv <- rnorm(1e5, 10, 2)
# proportions of normally-distributed data
# within 1, 2, or 3 SD of the mean
sd1 <- .6827
sd2 <- .9545
sd3 <- .9973
ggplot() +
geom_density(aes(dv), fill = "white") +
geom_vline(xintercept = mean(dv), color = "red") +
geom_vline(xintercept = quantile(dv, .5 - sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 + sd1/2), color = "darkgreen") +
geom_vline(xintercept = quantile(dv, .5 - sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 + sd2/2), color = "blue") +
geom_vline(xintercept = quantile(dv, .5 - sd3/2), color = "purple") +
geom_vline(xintercept = quantile(dv, .5 + sd3/2), color = "purple") +
scale_x_continuous(
limits = c(0,20),
breaks = seq(0,20)
)
```
Run the simulation above several times, noting how the density plot changes. What do the vertical lines represent? Try changing the values of `n`, `mean`, and `sd`.
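One way to answer that question (a minimal sketch): check what proportion of the simulated values actually fall within 1, 2, and 3 SDs of the mean, and compare them to the proportions used to place the lines.
```
# empirical proportions within 1, 2, and 3 SDs of the mean
mean(abs(dv - mean(dv)) < 1 * sd(dv)) # ~0.68
mean(abs(dv - mean(dv)) < 2 * sd(dv)) # ~0.95
mean(abs(dv - mean(dv)) < 3 * sd(dv)) # ~0.997
```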
### 8\.4\.4 Poisson Distribution
The [Poisson distribution](https://psyteachr.github.io/glossary/p#poisson-distribution "A distribution that models independent events happening over a unit of time") is useful for modelling events, like how many times something happens over a unit of time, as long as the events are independent (e.g., an event having happened in one time period doesn’t make it more or less likely to happen in the next).
`rpois(n, lambda)`
The `rpois` function will generate a random Poisson distribution.
* `n` \= number of observations
* `lambda` \= the mean number of events per observation
Let’s say we want to model how many texts you get each day for a whole year. You know that you get an average of 20 texts per day. So we set `n = 365` and `lambda = 20`. Lambda is a [parameter](https://psyteachr.github.io/glossary/p#parameter "A value that describes a distribution, such as the mean or SD") that describes the Poisson distribution, just like mean and standard deviation are parameters that describe the normal distribution.
```
texts <- rpois(n = 365, lambda = 20)
ggplot() +
geom_histogram(
aes(texts),
binwidth = 1,
fill = "white",
color = "black"
)
```
So we can see that over a year, you’re unlikely to get fewer than 5 texts in a day, or more than 35 (although it’s not impossible).
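You can make that claim more precise, either from the simulated sample or from the theoretical distribution. A quick sketch:
```
# how rare are those extremes?
mean(texts < 5)            # proportion of simulated days with fewer than 5 texts
ppois(4, lambda = 20)      # theoretical P(4 or fewer texts)
1 - ppois(35, lambda = 20) # theoretical P(more than 35 texts)
```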
8\.5 Multivariate Distributions
-------------------------------
### 8\.5\.1 Bivariate Normal
A [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate-normal "Two normally distributed vectors that have a specified correlation with each other.") distribution is two normally distributed vectors that have a specified relationship, or [correlation](https://psyteachr.github.io/glossary/c#correlation "The relationship two vectors have to each other.") to each other.
What if we want to sample from a population with specific relationships between variables? We can sample from a bivariate normal distribution using `mvrnorm()` from the `MASS` package.
Don’t load MASS with the `library()` function: it will mask the `select()` function from dplyr, so you would have to write `dplyr::select()` every time. Just call `MASS::mvrnorm()` directly.
You need to know how many observations you want to simulate (`n`), the means of the two variables (`mu`), and you need to calculate a [covariance matrix](https://psyteachr.github.io/glossary/c#covariance-matrix "Parameters showing how a set of vectors vary and are correlated.") (`sigma`) from the correlation between the variables (`rho`) and their standard deviations (`sd`).
```
n <- 1000 # number of random samples
# name the mu values to give the resulting columns names
mu <- c(x = 10, y = 20) # the means of the samples
sd <- c(5, 6) # the SDs of the samples
rho <- 0.5 # population correlation between the two variables
# correlation matrix
cor_mat <- matrix(c( 1, rho,
rho, 1), 2)
# create the covariance matrix
sigma <- (sd %*% t(sd)) * cor_mat
# sample from bivariate normal distribution
bvn <- MASS::mvrnorm(n, mu, sigma)
```
Plot your sampled variables to check everything worked like you expect. It’s easiest to convert the output of `mvrnorm()` into a tibble in order to use it in ggplot.
```
bvn %>%
as_tibble() %>%
ggplot(aes(x, y)) +
geom_point(alpha = 0.5) +
geom_smooth(method = "lm") +
geom_density2d()
```
```
## `geom_smooth()` using formula 'y ~ x'
```
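It’s also worth checking the simulated parameters numerically. A minimal sketch:
```
# the sample statistics should be close to the parameters we set
colMeans(bvn)      # compare to mu
apply(bvn, 2, sd)  # compare to sd
cor(bvn)           # off-diagonal value compares to rho
```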
### 8\.5\.2 Multivariate Normal
You can generate more than 2 correlated variables, but it gets a little trickier to create the correlation matrix.
```
n <- 200 # number of random samples
mu <- c(x = 10, y = 20, z = 30) # the means of the samples
sd <- c(8, 9, 10) # the SDs of the samples
rho1_2 <- 0.5 # correlation between x and y
rho1_3 <- 0 # correlation between x and z
rho2_3 <- 0.7 # correlation between y and z
# correlation matrix
cor_mat <- matrix(c( 1, rho1_2, rho1_3,
rho1_2, 1, rho2_3,
rho1_3, rho2_3, 1), 3)
sigma <- (sd %*% t(sd)) * cor_mat
bvn3 <- MASS::mvrnorm(n, mu, sigma)
cor(bvn3) # check correlation matrix
```
```
## x y z
## x 1.0000000 0.5896674 0.1513108
## y 0.5896674 1.0000000 0.7468737
## z 0.1513108 0.7468737 1.0000000
```
You can use the `plotly` library to make a 3D graph.
```
#set up the marker style
marker_style = list(
color = "#ff0000",
line = list(
color = "#444",
width = 1
),
opacity = 0.5,
size = 5
)
# convert bvn3 to a tibble, plot and add markers
bvn3 %>%
as_tibble() %>%
plot_ly(x = ~x, y = ~y, z = ~z, marker = marker_style) %>%
add_markers()
```
### 8\.5\.3 Faux
Alternatively, you can use the package [faux](https://debruine.github.io/faux/) to generate any number of correlated variables. It also has a function for checking the parameters of your new simulated data (`check_sim_stats()`).
```
bvn3 <- rnorm_multi(
n = n,
vars = 3,
mu = mu,
sd = sd,
r = c(rho1_2, rho1_3, rho2_3),
varnames = c("x", "y", "z")
)
check_sim_stats(bvn3)
```
| n | var | x | y | z | mean | sd |
| --- | --- | --- | --- | --- | --- | --- |
| 200 | x | 1\.00 | 0\.54 | 0\.10 | 10\.35 | 7\.66 |
| 200 | y | 0\.54 | 1\.00 | 0\.67 | 20\.01 | 8\.77 |
| 200 | z | 0\.10 | 0\.67 | 1\.00 | 30\.37 | 9\.59 |
You can also use faux to simulate data for factorial designs. Set up the between\-subject and within\-subject factors as lists with the levels as (named) vectors. Means and standard deviations can be included as vectors or data frames. The function calculates sigma for you, structures your dataset, and outputs a plot of the design.
```
b <- list(pet = c(cat = "Cat Owners",
dog = "Dog Owners"))
w <- list(time = c("morning",
"noon",
"night"))
mu <- data.frame(
cat = c(10, 12, 14),
dog = c(10, 15, 20),
row.names = w$time
)
sd <- c(3, 3, 3, 5, 5, 5)
pet_data <- sim_design(
within = w,
between = b,
n = 100,
mu = mu,
sd = sd,
r = .5)
```
You can use the `check_sim_stats()` function, but you need to set the argument `between` to a vector of all the between\-subject factor columns.
```
check_sim_stats(pet_data, between = "pet")
```
| pet | n | var | morning | noon | night | mean | sd |
| --- | --- | --- | --- | --- | --- | --- | --- |
| cat | 100 | morning | 1\.00 | 0\.57 | 0\.51 | 10\.62 | 3\.48 |
| cat | 100 | noon | 0\.57 | 1\.00 | 0\.59 | 12\.44 | 3\.01 |
| cat | 100 | night | 0\.51 | 0\.59 | 1\.00 | 14\.61 | 3\.14 |
| dog | 100 | morning | 1\.00 | 0\.55 | 0\.50 | 9\.44 | 4\.92 |
| dog | 100 | noon | 0\.55 | 1\.00 | 0\.48 | 14\.18 | 5\.90 |
| dog | 100 | night | 0\.50 | 0\.48 | 1\.00 | 19\.42 | 5\.36 |
See the [faux website](https://debruine.github.io/faux/) for more detailed tutorials.
8\.6 Statistical terms
----------------------
Let’s review some important statistical terms before we look at tests of distributions.
### 8\.6\.1 Effect
The [effect](https://psyteachr.github.io/glossary/e#effect "Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value.") is some measure of your data. This will depend on the type of data you have and the type of statistical test you are using. For example, if you flipped a coin 100 times and it landed heads 66 times, the effect would be 66/100\. You can then use the exact binomial test to compare this effect to the [null effect](https://psyteachr.github.io/glossary/n#null-effect "An outcome that does not show an otherwise expected effect.") you would expect from a fair coin (50/100\) or to any other effect you choose. The [effect size](https://psyteachr.github.io/glossary/e#effect-size "The difference between the effect in your data and the null effect (usually a chance value)") refers to the difference between the effect in your data and the null effect (usually a chance value).
### 8\.6\.2 P\-value
The [p\-value](https://psyteachr.github.io/glossary/p#p-value "The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect)") of a test is the probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect). So if you used a binomial test to test against a chance probability of 1/6 (e.g., the probability of rolling 1 with a 6\-sided die), then a p\-value of 0\.17 means that you could expect to see effects at least as extreme as your data 17% of the time just by chance alone.
### 8\.6\.3 Alpha
If you are using null hypothesis significance testing ([NHST](https://psyteachr.github.io/glossary/n#nhst "Null Hypothesis Signficance Testing")), then you need to decide on a cutoff value ([alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot")) for making a decision to reject the null hypothesis. We call p\-values below the alpha cutoff [significant](https://psyteachr.github.io/glossary/s#significant "The conclusion when the p-value is less than the critical alpha. "). In psychology, alpha is traditionally set at 0\.05, but there are good arguments for [setting a different criterion in some circumstances](http://daniellakens.blogspot.com/2019/05/justifying-your-alpha-by-minimizing-or.html).
### 8\.6\.4 False Positive/Negative
The probability that a test concludes there is an effect when there is really no effect (e.g., concludes a fair coin is biased) is called the [false positive](https://psyteachr.github.io/glossary/f#false-positive "When a test concludes there is an effect when there really is no effect") rate (or [Type I Error](https://psyteachr.github.io/glossary/t#type-i-error "A false positive; When a test concludes there is an effect when there is really is no effect") Rate). The [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") is the false positive rate we accept for a test. The probability that a test concludes there is no effect when there really is one (e.g., concludes a biased coin is fair) is called the [false negative](https://psyteachr.github.io/glossary/f#false-negative "When a test concludes there is no effect when there really is an effect") rate (or [Type II Error](https://psyteachr.github.io/glossary/t#type-ii-error "A false negative; When a test concludes there is no effect when there is really is an effect") Rate). The [beta](https://psyteachr.github.io/glossary/b#beta "The false negative rate we accept for a statistical test.") is the false negative rate we accept for a test.
The false positive rate is not the overall probability of getting a false positive, but the probability of a false positive *under the null hypothesis*. Similarly, the false negative rate is the probability of a false negative *under the alternative hypothesis*. Unless we know the probability that we are testing a null effect, we can’t say anything about the overall probability of false positives or negatives. If 100% of the hypotheses we test are false, then all significant effects are false positives, but if all of the hypotheses we test are true, then all of the positives are true positives and the overall false positive rate is 0\.
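You can see the first point in a simulation. A minimal sketch (using `t.test()` on two samples drawn from identical distributions, so the null is true by construction): the proportion of significant results should be close to alpha.
```
# when the null is true, every significant result is a false positive
null_ps <- replicate(1e4, t.test(rnorm(30), rnorm(30))$p.value)
mean(null_ps < 0.05) # should be close to 0.05
```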
### 8\.6\.5 Power and SESOI
[Power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") is equal to 1 minus beta (i.e., the [true positive](https://psyteachr.github.io/glossary/t#true-positive "When a test concludes there is an effect when there is really is an effect") rate), and depends on the effect size, how many samples we take (n), and what we set alpha to. For any test, if you specify all but one of these values, you can calculate the last. The effect size you use in power calculations should be the smallest effect size of interest ([SESOI](https://psyteachr.github.io/glossary/s#sesoi "Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful")). See ([Daniël Lakens, Scheel, and Isager 2018](#ref-TOSTtutorial))([https://doi.org/10\.1177/2515245918770963](https://doi.org/10.1177/2515245918770963)) for a tutorial on methods for choosing an SESOI.
Let’s say you want to be able to detect at least a 15% difference from chance (50%) in a coin’s fairness, and you want your test to have a 5% chance of false positives and a 10% chance of false negatives. What are the following values?
* alpha \=
* beta \=
* false positive rate \=
* false negative rate \=
* power \=
* SESOI \=
### 8\.6\.6 Confidence Intervals
The [confidence interval](https://psyteachr.github.io/glossary/c#confidence-interval "A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic.") is a range around some value (such as a mean) that has some probability of containing the parameter, if you repeated the process many times. Traditionally in psychology, we use 95% confidence intervals, but you can calculate CIs for any percentage.
A 95% CI does *not* mean that there is a 95% probability that the true mean lies within this range, but that, if you repeated the study many times and calculated the CI this same way every time, you’d expect the true mean to be inside the CI in 95% of the studies. This seems like a subtle distinction, but can lead to some misunderstandings. See ([Morey et al. 2016](#ref-Morey2016))([https://link.springer.com/article/10\.3758/s13423\-015\-0947\-8](https://link.springer.com/article/10.3758/s13423-015-0947-8)) for more detailed discussion.
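A minimal sketch of that repeated-study interpretation, using simulated samples with a known true mean of 0\.5: the coverage should be close to 95%.
```
# proportion of simulated studies whose 95% CI contains the true mean (0.5)
covered <- replicate(1e4, {
  ci <- t.test(rnorm(30, 0.5, 1))$conf.int
  ci[1] < 0.5 & 0.5 < ci[2]
})
mean(covered)
```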
8\.7 Tests
----------
### 8\.7\.1 Exact binomial test
`binom.test(x, n, p)`
You can test a binomial distribution against a specific probability using the exact binomial test.
* `x` \= the number of successes
* `n` \= the number of trials
* `p` \= hypothesised probability of success
Here we can test a series of 10 coin flips from a fair coin and a biased coin against the hypothesised probability of 0\.5 (even odds).
```
n <- 10
fair_coin <- rbinom(1, n, 0.5)
biased_coin <- rbinom(1, n, 0.6)
binom.test(fair_coin, n, p = 0.5)
binom.test(biased_coin, n, p = 0.5)
```
```
##
## Exact binomial test
##
## data: fair_coin and n
## number of successes = 6, number of trials = 10, p-value = 0.7539
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.2623781 0.8784477
## sample estimates:
## probability of success
## 0.6
##
##
## Exact binomial test
##
## data: biased_coin and n
## number of successes = 8, number of trials = 10, p-value = 0.1094
## alternative hypothesis: true probability of success is not equal to 0.5
## 95 percent confidence interval:
## 0.4439045 0.9747893
## sample estimates:
## probability of success
## 0.8
```
Run the code above several times, noting the p\-values for the fair and biased coins. Alternatively, you can [simulate coin flips](http://shiny.psy.gla.ac.uk/debruine/coinsim/) online and build up a graph of results and p\-values.
* How does the p\-value vary for the fair and biased coins?
* What happens to the confidence intervals if you increase n from 10 to 100?
* What criterion would you use to tell if the observed data indicate the coin is fair or biased?
* How often do you conclude the fair coin is biased (false positives)?
* How often do you conclude the biased coin is fair (false negatives)?
#### 8\.7\.1\.1 Sampling function
To estimate these rates, we need to repeat the sampling above many times. A [function](https://psyteachr.github.io/glossary/f#function "A named section of code that can be reused.") is ideal for repeating the exact same procedure over and over. Set the arguments of the function to variables that you might want to change. Here, we will want to estimate power for:
* different sample sizes (`n`)
* different effects (`bias`)
* different hypothesised probabilities (`p`, defaults to 0\.5\)
```
sim_binom_test <- function(n, bias, p = 0.5) {
# simulate 1 coin flip n times with the specified bias
coin <- rbinom(1, n, bias)
# run a binomial test on the simulated data for the specified p
btest <- binom.test(coin, n, p)
# return the p-value of this test
btest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_binom_test(100, 0.6)
```
```
## [1] 0.1332106
```
#### 8\.7\.1\.2 Calculate power
Then you can use the `replicate()` function to run it many times and save all the output values. You can calculate the [power](https://psyteachr.github.io/glossary/p#power "The probability of rejecting the null hypothesis when it is false.") of your analysis by checking the proportion of your simulated analyses that have a p\-value less than your [alpha](https://psyteachr.github.io/glossary/a#alpha "(stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot") (the probability of rejecting the null hypothesis when the null hypothesis is true).
```
my_reps <- replicate(1e4, sim_binom_test(100, 0.6))
alpha <- 0.05 # this does not always have to be 0.05
mean(my_reps < alpha)
```
```
## [1] 0.4561
```
`1e4` is just scientific notation for a 1 followed by 4 zeros (`10000`). When you’re running simulations, you usually want to run a lot of them. It’s a pain to keep track of whether you’ve typed 5 or 6 zeros (100000 vs 1000000\) and this will change your running time by an order of magnitude.
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
### 8\.7\.2 T\-test
`t.test(x, y, alternative, mu, paired)`
Use a t\-test to compare the mean of one distribution to a null hypothesis (one\-sample t\-test), compare the means of two samples (independent\-samples t\-test), or compare pairs of values (paired\-samples t\-test).
You can run a one\-sample t\-test comparing the mean of your data to `mu`. Here is a simulated distribution with a mean of 0\.5 and an SD of 1, creating an effect size of 0\.5 SD when tested against a `mu` of 0\. Run the simulation a few times to see how often the t\-test returns a significant p\-value (or run it in the [shiny app](http://shiny.psy.gla.ac.uk/debruine/normsim/)).
```
sim_norm <- rnorm(100, 0.5, 1)
t.test(sim_norm, mu = 0)
```
```
##
## One Sample t-test
##
## data: sim_norm
## t = 6.2874, df = 99, p-value = 8.758e-09
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.4049912 0.7784761
## sample estimates:
## mean of x
## 0.5917337
```
Run an independent\-samples t\-test by comparing two lists of values.
```
a <- rnorm(100, 0.5, 1)
b <- rnorm(100, 0.7, 1)
t_ind <- t.test(a, b, paired = FALSE)
t_ind
```
```
##
## Welch Two Sample t-test
##
## data: a and b
## t = -1.8061, df = 197.94, p-value = 0.07243
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.54825320 0.02408469
## sample estimates:
## mean of x mean of y
## 0.4585985 0.7206828
```
The `paired` argument defaults to `FALSE`, but it’s good practice to always explicitly set it so you are never confused about what type of test you are performing.
#### 8\.7\.2\.1 Sampling function
We can use the `names()` function to find out the names of all the components in the object returned by `t.test()`, and use this to extract just one piece of information, like the test statistic (e.g., the t\-value).
```
names(t_ind)
t_ind$statistic
```
```
## [1] "statistic" "parameter" "p.value" "conf.int" "estimate"
## [6] "null.value" "stderr" "alternative" "method" "data.name"
## t
## -1.806051
```
If you want to run the simulation many times and record information each time, first you need to turn your simulation into a function.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2) {
# simulate v1
v1 <- rnorm(n, m1, sd1)
#simulate v2
v2 <- rnorm(n, m2, sd2)
# compare using an independent samples t-test
t_ind <- t.test(v1, v2, paired = FALSE)
# return the p-value
return(t_ind$p.value)
}
```
Run it a few times to check that it gives you sensible values.
```
sim_t_ind(100, 0.7, 1, 0.5, 1)
```
```
## [1] 0.362521
```
#### 8\.7\.2\.2 Calculate power
Now replicate the simulation 1e4 times.
```
my_reps <- replicate(1e4, sim_t_ind(100, 0.7, 1, 0.5, 1))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.2925
```
Run the code above several times. How much does the power value fluctuate? How many replications do you need to run to get a reliable estimate of power?
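One way to think about this: each replication either is or isn’t significant, so the simulated power value behaves like a sample proportion, and its precision improves with the square root of the number of replications. A rough sketch (the 0.29 is just the illustrative estimate from above):
```
p_hat <- 0.29                    # an illustrative power estimate
reps <- c(1e2, 1e3, 1e4, 1e5)    # numbers of replications
# approximate standard error of the power estimate for each choice of reps
sqrt(p_hat * (1 - p_hat) / reps)
```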
Compare your power estimate from simulation to a power calculation using `power.t.test()`. Here, `delta` is the difference between `m1` and `m2` above.
```
power.t.test(n = 100,
delta = 0.2,
sd = 1,
sig.level = alpha,
type = "two.sample")
```
```
##
## Two-sample t test power calculation
##
## n = 100
## delta = 0.2
## sd = 1
## sig.level = 0.05
## power = 0.2902664
## alternative = two.sided
##
## NOTE: n is number in *each* group
```
You can plot the distribution of p\-values.
```
ggplot() +
geom_histogram(
aes(my_reps),
binwidth = 0.05,
boundary = 0,
fill = "white",
color = "black"
)
```
What do you think the distribution of p\-values is when there is no effect (i.e., the means are identical)? Check this yourself.
Make sure the `boundary` argument is set to `0` for p\-value histograms. See what happens with a null effect if `boundary` is not set.
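Here is a minimal sketch of one way to run that check, reusing the `sim_t_ind()` function defined above with identical means; try the plot with and without `boundary = 0` to see why it matters.
```
# identical means, so any "significant" result is a false positive
null_reps <- replicate(1e4, sim_t_ind(100, 0.5, 1, 0.5, 1))
ggplot() +
  geom_histogram(
    aes(null_reps),
    binwidth = 0.05,
    boundary = 0,
    fill = "white",
    color = "black"
  )
```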
### 8\.7\.3 Correlation
You can quantify how strongly two continuous variables are related using the `cor()` function, and test that relationship with `cor.test()` (used below). Let’s use `rnorm_multi()` to make a quick table of correlated values.
```
dat <- rnorm_multi(
n = 100,
vars = 2,
r = -0.5,
varnames = c("x", "y")
)
cor(dat$x, dat$y)
```
```
## [1] -0.4960331
```
Set `n` to a large number like 1e6 so that the correlations are less affected by chance. Change the value of the **mean** for `x` or `y`. Does it change the correlation between `x` and `y`? What happens when you increase or decrease the **sd**? Can you work out any rules here?
`cor()` defaults to Pearson’s correlations. Set the `method` argument to use Kendall or Spearman correlations.
```
cor(dat$x, dat$y, method = "spearman")
```
```
## [1] -0.4724992
```
#### 8\.7\.3\.1 Sampling function
Create a function that creates two variables with `n` observations and `r` correlation. Use the function `cor.test()` to give you p\-values for the correlation.
```
sim_cor_test <- function(n = 100, r = 0) {
dat <- rnorm_multi(
n = n,
vars = 2,
r = r,
varnames = c("x", "y")
)
ctest <- cor.test(dat$x, dat$y)
ctest$p.value
}
```
Once you’ve created your function, test it a few times, changing the values.
```
sim_cor_test(50, .5)
```
```
## [1] 0.001354836
```
#### 8\.7\.3\.2 Calculate power
Now replicate the simulation 1e4 times.
```
my_reps <- replicate(1e4, sim_cor_test(50, 0.5))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.965
```
Compare to the value calculated by the pwr package.
```
pwr::pwr.r.test(n = 50, r = 0.5)
```
```
##
## approximate correlation power calculation (arctangh transformation)
##
## n = 50
## r = 0.5
## sig.level = 0.05
## power = 0.9669813
## alternative = two.sided
```
8\.8 Example
------------
This example uses the [Growth Chart Data Tables](https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv) from the [US CDC](https://www.cdc.gov/growthcharts/zscore.htm). The data consist of height in centimeters for the z\-scores of –2, \-1\.5, \-1, \-0\.5, 0, 0\.5, 1, 1\.5, and 2 by sex (1\=male; 2\=female) and half\-month of age (from 24\.0 to 240\.5 months).
### 8\.8\.1 Load \& wrangle
We have to do a little data wrangling first. Have a look at the data after you import it and relabel `Sex` to `male` and `female` instead of `1` and `2`. Also convert `Agemos` (age in months) to years. Relabel the column `0` as `mean` and calculate a new column named `sd` as the difference between columns `1` and `0`.
```
orig_height_age <- read_csv("https://www.cdc.gov/growthcharts/data/zscore/zstatage.csv")
```
```
##
## ── Column specification ────────────────────────────────────────────────────────
## cols(
## Sex = col_character(),
## Agemos = col_character(),
## `-2` = col_double(),
## `-1.5` = col_double(),
## `-1` = col_double(),
## `-0.5` = col_double(),
## `0` = col_double(),
## `0.5` = col_double(),
## `1` = col_double(),
## `1.5` = col_double(),
## `2` = col_double()
## )
```
```
height_age <- orig_height_age %>%
filter(Sex %in% c(1,2)) %>%
mutate(
sex = recode(Sex, "1" = "male", "2" = "female"),
age = as.numeric(Agemos)/12,
sd = `1` - `0`
) %>%
select(sex, age, mean = `0`, sd)
```
### 8\.8\.2 Plot
Plot your new data frame to see how mean height changes with age for boys and girls.
```
ggplot(height_age, aes(age, mean, color = sex)) +
geom_smooth(aes(ymin = mean - sd,
ymax = mean + sd),
stat="identity")
```
### 8\.8\.3 Simulate a population
Simulate 50 random male heights and 50 random female heights for 20\-year\-olds using the `rnorm()` function and the means and SDs from the `height_age` table. Plot the data.
```
age_filter <- 20
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_height <- tibble(
male = rnorm(50, m$mean, m$sd),
female = rnorm(50, f$mean, f$sd)
) %>%
gather("sex", "height", male:female)
ggplot(sim_height) +
geom_density(aes(height, fill = sex), alpha = 0.5) +
xlim(125, 225)
```
Run the simulation above several times, noting how the density plot changes. Try changing the age you’re simulating.
### 8\.8\.4 Analyse simulated data
Use the `sim_t_ind(n, m1, sd1, m2, sd2)` function we created above to generate one simulation with a sample size of 50 in each group using the means and SDs of male and female 14\-year\-olds.
```
age_filter <- 14
m <- filter(height_age, age == age_filter, sex == "male")
f <- filter(height_age, age == age_filter, sex == "female")
sim_t_ind(50, m$mean, m$sd, f$mean, f$sd)
```
```
## [1] 0.0005255744
```
### 8\.8\.5 Replicate simulation
Now replicate this 1e4 times using the `replicate()` function. This function will save the returned p\-values in a vector (`my_reps`). We can then check what proportion of those p\-values are less than our alpha value. This is the power of our test.
```
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd))
alpha <- 0.05
power <- mean(my_reps < alpha)
power
```
```
## [1] 0.6403
```
### 8\.8\.6 One\-tailed prediction
This design has about 65% power to detect the sex difference in height (with a 2\-tailed test). Modify the `sim_t_ind` function for a 1\-tailed prediction.
You could just set `alternative` equal to “greater” in the function, but it might be better to add the `alt` argument to your function (giving it the same default value as `t.test`) and change the value of `alternative` in the function to `alt`.
```
sim_t_ind <- function(n, m1, sd1, m2, sd2, alt = "two.sided") {
v1 <- rnorm(n, m1, sd1)
v2 <- rnorm(n, m2, sd2)
t_ind <- t.test(v1, v2, paired = FALSE, alternative = alt)
return(t_ind$p.value)
}
alpha <- 0.05
my_reps <- replicate(1e4, sim_t_ind(50, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(my_reps < alpha)
```
```
## [1] 0.752
```
### 8\.8\.7 Range of sample sizes
What if we want to find out what sample size will give us 80% power? We can try trial and error. We know the number should be slightly larger than 50\. But you can search more systematically by repeating your power calculation for a range of sample sizes.
This might seem like overkill for a t\-test, where you can easily look up sample size calculators online, but it is a valuable skill to learn for when your analyses become more complicated.
Start with a relatively low number of replications and/or more spread\-out samples to estimate where you should be looking more specifically. Then you can repeat with a narrower/denser range of sample sizes and more iterations.
```
# make another custom function to return power
pwr_func <- function(n, reps = 100, alpha = 0.05) {
ps <- replicate(reps, sim_t_ind(n, m$mean, m$sd, f$mean, f$sd, "greater"))
mean(ps < alpha)
}
# make a table of the n values you want to check
power_table <- tibble(
n = seq(20, 100, by = 5)
) %>%
# run the power function for each n
mutate(power = map_dbl(n, pwr_func))
# plot the results
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
Now we can narrow down our search to values around 55 (plus or minus 5\) and increase the number of replications from 100 to 1e4\.
```
power_table <- tibble(
n = seq(50, 60)
) %>%
mutate(power = map_dbl(n, pwr_func, reps = 1e4))
ggplot(power_table, aes(n, power)) +
geom_smooth() +
geom_point() +
geom_hline(yintercept = 0.8)
```
```
## `geom_smooth()` using method = 'loess' and formula 'y ~ x'
```
8\.9 Glossary
-------------
| term | definition |
| --- | --- |
| [alpha](https://psyteachr.github.io/glossary/a#alpha) | (stats) The cutoff value for making a decision to reject the null hypothesis; (graphics) A value between 0 and 1 used to control the levels of transparency in a plot |
| [beta](https://psyteachr.github.io/glossary/b#beta) | The false negative rate we accept for a statistical test. |
| [binomial distribution](https://psyteachr.github.io/glossary/b#binomial.distribution) | The distribution of data where each observation can have one of two outcomes, like success/failure, yes/no or head/tails. |
| [bivariate normal](https://psyteachr.github.io/glossary/b#bivariate.normal) | Two normally distributed vectors that have a specified correlation with each other. |
| [confidence interval](https://psyteachr.github.io/glossary/c#confidence.interval) | A type of interval estimate used to summarise a given statistic or measurement where a proportion of intervals calculated from the sample(s) will contain the true value of the statistic. |
| [correlation](https://psyteachr.github.io/glossary/c#correlation) | The relationship two vectors have to each other. |
| [covariance matrix](https://psyteachr.github.io/glossary/c#covariance.matrix) | Parameters showing how a set of vectors vary and are correlated. |
| [discrete](https://psyteachr.github.io/glossary/d#discrete) | Data that can only take certain values, such as integers. |
| [effect size](https://psyteachr.github.io/glossary/e#effect.size) | The difference between the effect in your data and the null effect (usually a chance value) |
| [effect](https://psyteachr.github.io/glossary/e#effect) | Some measure of your data, such as the mean value, or the number of standard deviations the mean differs from a chance value. |
| [false negative](https://psyteachr.github.io/glossary/f#false.negative) | When a test concludes there is no effect when there really is an effect |
| [false positive](https://psyteachr.github.io/glossary/f#false.positive) | When a test concludes there is an effect when there really is no effect |
| [function](https://psyteachr.github.io/glossary/f#function.) | A named section of code that can be reused. |
| [nhst](https://psyteachr.github.io/glossary/n#nhst) | Null Hypothesis Significance Testing |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [null effect](https://psyteachr.github.io/glossary/n#null.effect) | An outcome that does not show an otherwise expected effect. |
| [p value](https://psyteachr.github.io/glossary/p#p.value) | The probability of seeing an effect at least as extreme as what you have, if the real effect was the value you are testing against (e.g., a null effect) |
| [parameter](https://psyteachr.github.io/glossary/p#parameter) | A value that describes a distribution, such as the mean or SD |
| [poisson distribution](https://psyteachr.github.io/glossary/p#poisson.distribution) | A distribution that models independent events happening over a unit of time |
| [power](https://psyteachr.github.io/glossary/p#power) | The probability of rejecting the null hypothesis when it is false. |
| [probability](https://psyteachr.github.io/glossary/p#probability) | A number between 0 and 1 where 0 indicates impossibility of the event and 1 indicates certainty |
| [sesoi](https://psyteachr.github.io/glossary/s#sesoi) | Smallest Effect Size of Interest: the smallest effect that is theoretically or practically meaningful |
| [significant](https://psyteachr.github.io/glossary/s#significant) | The conclusion when the p\-value is less than the critical alpha. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [true positive](https://psyteachr.github.io/glossary/t#true.positive) | When a test concludes there is an effect when there really is an effect |
| [type i error](https://psyteachr.github.io/glossary/t#type.i.error) | A false positive; when a test concludes there is an effect when there really is no effect |
| [type ii error](https://psyteachr.github.io/glossary/t#type.ii.error) | A false negative; when a test concludes there is no effect when there really is an effect |
| [uniform distribution](https://psyteachr.github.io/glossary/u#uniform.distribution) | A distribution where all numbers in the range have an equal probability of being sampled |
| [univariate](https://psyteachr.github.io/glossary/u#univariate) | Relating to a single variable. |
8\.10 Exercises
---------------
Download the [exercises](exercises/08_sim_exercise.Rmd). See the [answers](exercises/08_sim_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(8)
# run this to access the answers
dataskills::exercise(8, answers = TRUE)
```
Chapter 9 Introduction to GLM
=============================
9\.1 Learning Objectives
------------------------
### 9\.1\.1 Basic
1. Define the [components](glm.html#glm-components) of the GLM
2. [Simulate data](glm.html#sim-glm) using GLM equations [(video)](https://youtu.be/JQ90LnVCbKc)
3. Identify the model parameters that correspond to the data\-generation parameters
4. Understand and plot [residuals](glm.html#residuals) [(video)](https://youtu.be/sr-NtxiH2Qk)
5. [Predict new values](glm.html#predict) using the model [(video)](https://youtu.be/0o4LEbVVWfM)
6. Explain the differences among [coding schemes](glm.html#coding-schemes) [(video)](https://youtu.be/SqL28AbLj3g)
### 9\.1\.2 Intermediate
7. Demonstrate the [relationships](glm.html#test-rels) among two\-sample t\-test, one\-way ANOVA, and linear regression
8. Given data and a GLM, [generate a decomposition matrix](glm.html#decomp) and calculate sums of squares, mean squares, and F ratios for a one\-way ANOVA
9\.2 Resources
--------------
* [Stub for this lesson](stubs/9_glm.Rmd)
* [Jeff Miller and Patricia Haden, Statistical Analysis with the Linear Model (free online textbook)](http://www.otago.ac.nz/psychology/otago039309.pdf)
* [lecture slides introducing the General Linear Model](slides/08_glm_slides.pdf)
* [GLM shiny app](http://rstudio2.psy.gla.ac.uk/Dale/GLM)
* [F distribution](http://rstudio2.psy.gla.ac.uk/Dale/fdist)
9\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(broom)
set.seed(30250) # makes sure random numbers are reproducible
```
9\.4 GLM
--------
### 9\.4\.1 What is the GLM?
The [General Linear Model](https://psyteachr.github.io/glossary/g#general-linear-model "A mathematical model comparing how one or more independent variables affect a continuous dependent variable") (GLM) is a general mathematical framework for expressing and testing linear relationships between a numerical [dependent variable](https://psyteachr.github.io/glossary/d#dependent-variable "The target variable that is being analyzed, whose value is assumed to depend on other variables.") and any combination of [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") or [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") [independent variables](https://psyteachr.github.io/glossary/i#independent-variable "A variable whose value is assumed to influence the value of a dependent variable.").
### 9\.4\.2 Components
There are some mathematical conventions that you need to learn to understand the equations representing linear models. Once you understand those, learning about the GLM will get much easier.
| Component of GLM | Notation |
| --- | --- |
| Dependent Variable (DV) | \\(Y\\) |
| Grand Average | \\(\\mu\\) (the Greek letter “mu”) |
| Main Effects | \\(A, B, C, \\ldots\\) |
| Interactions | \\(AB, AC, BC, ABC, \\ldots\\) |
| Random Error | \\(S(Group)\\) |
The linear equation predicts the dependent variable (\\(Y\\)) as the sum of the grand average value of \\(Y\\) (\\(\\mu\\), also called the intercept), the main effects of all the predictor variables (\\(A\+B\+C\+ \\ldots\\)), the interactions among all the predictor variables (\\(AB, AC, BC, ABC, \\ldots\\)), and some random error (\\(S(Group)\\)). The equation for a model with two predictor variables (\\(A\\) and \\(B\\)) and their interaction (\\(AB\\)) is written like this:
\\(Y\\) \~ \\(\\mu\+A\+B\+AB\+S(Group)\\)
Don’t worry if this doesn’t make sense until we walk through a concrete example.
### 9\.4\.3 Simulating data from GLM
A good way to learn about linear models is to [simulate](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") data where you know exactly how the variables are related, and then analyse this simulated data to see where the parameters show up in the analysis.
We’ll start with a very simple linear model that just has a single categorical factor with two levels. Let’s say we’re predicting reaction times for congruent and incongruent trials in a Stroop task for a single participant. Average reaction time (`mu`) is 800ms, and is 50ms faster for congruent than incongruent trials (`effect`).
A **factor** is a categorical variable that is used to divide subjects into groups, usually to draw some comparison. Factors are composed of different **levels**. Do not confuse factors with levels!
In the example above, trial type is the factor, and congruent and incongruent are its levels.
You need to represent [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") factors with numbers. The numbers, or [coding scheme](https://psyteachr.github.io/glossary/c#coding-scheme "How to represent categorical variables with numbers for use in models") you choose will affect the numbers you get out of the analysis and how you need to interpret them. Here, we will [effect code](https://psyteachr.github.io/glossary/e#effect-code "A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means.") the trial types so that congruent trials are coded as \+0\.5, and incongruent trials are coded as \-0\.5\.
A person won’t always respond exactly the same way. They might be a little faster on some trials than others, due to random fluctuations in attention, learning about the task, or fatigue. So we can add an [error term](https://psyteachr.github.io/glossary/e#error-term "The term in a model that represents the difference between the actual and predicted values") to each trial. We can’t know how much any specific trial will differ, but we can characterise the distribution of how much trials differ from average and then sample from this distribution.
Here, we’ll assume the error term is sampled from a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") with a [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of 100 ms (the mean of the error term distribution is always 0\). We’ll also sample 100 trials of each type, so we can see a range of variation.
So first create variables for all of the parameters that describe your data.
```
n_per_grp <- 100
mu <- 800 # average RT
effect <- 50 # average difference between congruent and incongruent trials
error_sd <- 100 # standard deviation of the error term
trial_types <- c("congruent" = 0.5, "incongruent" = -0.5) # effect code
```
Then simulate the data by creating a data table with a row for each trial and columns for the trial type and the error term (random numbers sampled from a normal distribution with the SD specified by `error_sd`). For categorical variables, include both a column with the text labels (`trial_type`) and another column with the coded version (`trial_type.e`) to make it easier to check what the codings mean and to use for graphing. Calculate the dependent variable (`RT`) as the sum of the grand mean (`mu`), the coefficient (`effect`) multiplied by the effect\-coded predictor variable (`trial_type.e`), and the error term.
```
dat <- data.frame(
trial_type = rep(names(trial_types), each = n_per_grp)
) %>%
mutate(
trial_type.e = recode(trial_type, !!!trial_types),
error = rnorm(nrow(.), 0, error_sd),
RT = mu + effect*trial_type.e + error
)
```
The `!!!` (triple bang) in the code `recode(trial_type, !!!trial_types)` is a way to expand the vector `trial_types <- c("congruent" = 0.5, "incongruent" = -0.5)`. It’s equivalent to `recode(trial_type, "congruent" = 0.5, "incongruent" = -0.5)`. This pattern avoids making mistakes with recoding because there is only one place where you set up the category to code mapping (in the `trial_types` vector).
Last but not least, always plot simulated data to make sure it looks like you expect.
```
ggplot(dat, aes(trial_type, RT)) +
geom_violin() +
geom_boxplot(aes(fill = trial_type),
width = 0.25, show.legend = FALSE)
```
Figure 9\.1: Simulated Data
### 9\.4\.4 Linear Regression
Now we can analyse the data we simulated using the function `lm()`. It takes the formula as the first argument; this is the same as the data\-generating equation, but you can omit the error term (it is implied). The data table is the second argument. Use the `summary()` function to see the statistical summary.
```
my_lm <- lm(RT ~ trial_type.e, data = dat)
summary(my_lm)
```
```
##
## Call:
## lm(formula = RT ~ trial_type.e, data = dat)
##
## Residuals:
## Min 1Q Median 3Q Max
## -302.110 -70.052 0.948 68.262 246.220
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 788.192 7.206 109.376 < 2e-16 ***
## trial_type.e 61.938 14.413 4.297 2.71e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 101.9 on 198 degrees of freedom
## Multiple R-squared: 0.08532, Adjusted R-squared: 0.0807
## F-statistic: 18.47 on 1 and 198 DF, p-value: 2.707e-05
```
Notice how the **estimate** for the `(Intercept)` is close to the value we set for `mu` and the estimate for `trial_type.e` is close to the value we set for `effect`.
Change the values of `mu` and `effect`, resimulate the data, and re\-run the linear model. What happens to the estimates?
### 9\.4\.5 Residuals
You can use the `residuals()` function to extract the error term for each data point. These are the DV values minus the estimates for the intercept and trial type. We’ll make a density plot of the [residuals](https://psyteachr.github.io/glossary/r#residual-error "That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors.") below and compare it to the normal distribution we used for the error term.
```
res <- residuals(my_lm)
ggplot(dat) +
stat_function(aes(0), color = "grey60",
fun = dnorm, n = 101,
args = list(mean = 0, sd = error_sd)) +
geom_density(aes(res, color = trial_type))
```
Figure 9\.2: Model residuals should be approximately normally distributed for each group
You can also compare the model residuals to the simulated error values. If the model is accurate, they should be almost identical. If the intercept estimate is slightly off, the points will be slightly above or below the black line. If the estimate for the effect of trial type is slightly off, there will be a small, systematic difference between residuals for congruent and incongruent trials.
```
ggplot(dat) +
geom_abline(slope = 1) +
geom_point(aes(error, res,color = trial_type)) +
ylab("Model Residuals") +
xlab("Simulated Error")
```
Figure 9\.3: Model residuals should be very similar to the simulated error
What happens to the residuals if you fit a model that ignores trial type (e.g., `lm(Y ~ 1, data = dat)`)?
### 9\.4\.6 Predict New Values
You can use the estimates from your model to predict new data points, given values for the model parameters. For this simple example, we just need to know the trial type to make a prediction.
For congruent trials, you would predict that a new data point would be equal to the intercept estimate plus the trial type estimate multiplied by 0\.5 (the effect code for congruent trials).
```
int_est <- my_lm$coefficients[["(Intercept)"]]
tt_est <- my_lm$coefficients[["trial_type.e"]]
tt_code <- trial_types[["congruent"]]
new_congruent_RT <- int_est + tt_est * tt_code
new_congruent_RT
```
```
## [1] 819.1605
```
You can also use the `predict()` function to do this more easily. The second argument is a data table with columns for the factors in the model and rows with the values that you want to use for the prediction.
```
predict(my_lm, newdata = tibble(trial_type.e = 0.5))
```
```
## 1
## 819.1605
```
If you look up this function using `?predict`, you will see that “The function invokes particular methods which depend on the class of the first argument.” What this means is that `predict()` works differently depending on whether you’re predicting from the output of `lm()` or other analysis functions. You can search for help on the lm version with `?predict.lm`.
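As a small extension of the example above (assuming the `my_lm` model and the effect codes defined earlier), you can predict both trial types at once by passing a two\-row `newdata` table:
```
# predicted RTs for congruent (+0.5) and incongruent (-0.5) trials
predict(my_lm, newdata = tibble(trial_type.e = c(0.5, -0.5)))
```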
### 9\.4\.7 Coding Categorical Variables
In the example above, we used **effect coding** for trial type. You can also use **sum coding**, which assigns \+1 and \-1 to the levels instead of \+0\.5 and \-0\.5\. More commonly, you might want to use **treatment coding**, which assigns 0 to one level (usually a baseline or control condition) and 1 to the other level (usually a treatment or experimental condition).
Here we will add sum\-coded and treatment\-coded versions of `trial_type` to the dataset using the `recode()` function.
```
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, "congruent" = +1, "incongruent" = -1),
trial_type.tr = recode(trial_type, "congruent" = 1, "incongruent" = 0)
)
```
If you define named vectors with your levels and coding, you can use them with the `recode()` function if you expand them using `!!!`.
```
tt_sum <- c("congruent" = +1,
"incongruent" = -1)
tt_tr <- c("congruent" = 1,
"incongruent" = 0)
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, !!!tt_sum),
trial_type.tr = recode(trial_type, !!!tt_tr)
)
```
Here are the coefficients for the effect\-coded version. They should be the same as those from the last analysis.
```
lm(RT ~ trial_type.e, data = dat)$coefficients
```
```
## (Intercept) trial_type.e
## 788.19166 61.93773
```
Here are the coefficients for the sum\-coded version. This gives the same results as effect coding, except the estimate for the categorical factor will be exactly half as large, as it represents the difference between each trial type and the hypothetical condition of 0 (the overall mean RT), rather than the difference between the two trial types.
```
lm(RT ~ trial_type.sum, data = dat)$coefficients
```
```
## (Intercept) trial_type.sum
## 788.19166 30.96887
```
Here are the coefficients for the treatment\-coded version. The estimate for the categorical factor will be the same as in the effect\-coded version, but the intercept will decrease. It will be equal to the intercept minus the estimate for trial type from the sum\-coded version.
```
lm(RT ~ trial_type.tr, data = dat)$coefficients
```
```
## (Intercept) trial_type.tr
## 757.22279 61.93773
```
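A quick numerical check of the relationships described above, refitting the three codings to `dat`: the treatment\-coded intercept should equal the effect\-coded intercept (the grand mean) minus the sum\-coded estimate for trial type.
```
eff_int <- lm(RT ~ trial_type.e, data = dat)$coefficients[["(Intercept)"]]
sum_est <- lm(RT ~ trial_type.sum, data = dat)$coefficients[["trial_type.sum"]]
tr_int <- lm(RT ~ trial_type.tr, data = dat)$coefficients[["(Intercept)"]]
# should be TRUE (up to floating point error)
all.equal(tr_int, eff_int - sum_est)
```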
9\.5 Relationships among tests
------------------------------
### 9\.5\.1 T\-test
The t\-test is just a special, limited example of a general linear model.
```
t.test(RT ~ trial_type.e, data = dat, var.equal = TRUE)
```
```
##
## Two Sample t-test
##
## data: RT by trial_type.e
## t = -4.2975, df = 198, p-value = 2.707e-05
## alternative hypothesis: true difference in means between group -0.5 and group 0.5 is not equal to 0
## 95 percent confidence interval:
## -90.35945 -33.51601
## sample estimates:
## mean in group -0.5 mean in group 0.5
## 757.2228 819.1605
```
What happens when you use other codings for trial type in the t\-test above? Which coding maps onto the results of the t\-test best?
### 9\.5\.2 ANOVA
ANOVA is also a special, limited version of the linear model.
```
my_aov <- aov(RT ~ trial_type.e, data = dat)
summary(my_aov, intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 124249219 124249219 11963.12 < 2e-16 ***
## trial_type.e 1 191814 191814 18.47 2.71e-05 ***
## Residuals 198 2056432 10386
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The easiest way to get parameters out of an analysis is to use the `broom::tidy()` function. This returns a tidy table that you can extract numbers of interest from. Here, we just want to get the F\-value for the effect of trial\_type. Compare the square root of this value to the t\-value from the t\-tests above.
```
f <- broom::tidy(my_aov)$statistic[1]
sqrt(f)
```
```
## [1] 4.297498
```
9\.6 Understanding ANOVA
------------------------
We’ll walk through an example of a one\-way ANOVA with the following equation:
\\(Y\_{ij} \= \\mu \+ A\_i \+ S(A)\_{ij}\\)
This means that each data point (\\(Y\_{ij}\\)) is predicted to be the sum of the grand mean (\\(\\mu\\)), plus the effect of factor A (\\(A\_i\\)), plus some residual error (\\(S(A)\_{ij}\\)).
### 9\.6\.1 Means, Variability, and Deviation Scores
Let’s create a simple simulation function so you can quickly create a two\-sample dataset with specified Ns, means, and SDs.
```
two_sample <- function(n = 10, m1 = 0, m2 = 0, sd1 = 1, sd2 = 1) {
s1 <- rnorm(n, m1, sd1)
s2 <- rnorm(n, m2, sd2)
data.frame(
Y = c(s1, s2),
grp = rep(c("A", "B"), each = n)
)
}
```
Now we will use `two_sample()` to create a dataset `dat` with N\=5 per group, means of \-2 and \+2, and SDs of 1 and 1 (yes, this is an effect size of d \= 4\).
```
dat <- two_sample(5, -2, +2, 1, 1)
```
You can calculate how each data point (`Y`) deviates from the overall sample mean (\\(\\hat{\\mu}\\)), represented by the horizontal grey line below; the deviations are the vertical grey lines. You can also calculate how different each point is from its group\-specific mean (\\(\\hat{A\_i}\\)), represented by the horizontal coloured lines; those deviations are the coloured vertical lines.
Figure 9\.4: Deviations of each data point (Y) from the overall and group means
You can use these deviations to calculate variability between groups and within groups. ANOVA tests whether the variability between groups is larger than that within groups, accounting for the number of groups and observations.
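Here is a minimal sketch of that idea using the deviations directly (the column name `group_mean` is just for illustration); the two sums of squares should match the `a` and `err` sums of squares computed in the next subsection:
```
grand_mean <- mean(dat$Y)
dat %>%
  group_by(grp) %>%
  mutate(group_mean = mean(Y)) %>% # each point's group-specific mean
  ungroup() %>%
  summarise(
    SS_between = sum((group_mean - grand_mean)^2), # variability between groups
    SS_within  = sum((Y - group_mean)^2)           # variability within groups
  )
```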
### 9\.6\.2 Decomposition matrices
We can use the estimation equations for a one\-factor ANOVA to calculate the model components.
* `mu` is the overall mean
* `a` is how different each group mean is from the overall mean
* `err` is residual error, calculated by subtracting `mu` and `a` from `Y`
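In symbols, \\(\\hat{\\mu}\\) is the grand mean of `Y`, \\(\\hat{A\_i}\\) is the mean of group \\(i\\) minus the grand mean, and the residual is \\(Y\_{ij} \- \\hat{\\mu} \- \\hat{A\_i}\\), which is exactly what the code below computes.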
This produces a *decomposition matrix*, a table with columns for `Y`, `mu`, `a`, and `err`.
```
decomp <- dat %>%
select(Y, grp) %>%
mutate(mu = mean(Y)) %>% # calculate mu_hat
group_by(grp) %>%
mutate(a = mean(Y) - mu) %>% # calculate a_hat for each grp
ungroup() %>%
mutate(err = Y - mu - a) # calculate residual error
```
| Y | grp | mu | a | err |
| --- | --- | --- | --- | --- |
| \-1\.4770938 | A | 0\.1207513 | \-1\.533501 | \-0\.0643443 |
| \-2\.9508741 | A | 0\.1207513 | \-1\.533501 | \-1\.5381246 |
| \-0\.6376736 | A | 0\.1207513 | \-1\.533501 | 0\.7750759 |
| \-1\.7579084 | A | 0\.1207513 | \-1\.533501 | \-0\.3451589 |
| \-0\.2401977 | A | 0\.1207513 | \-1\.533501 | 1\.1725518 |
| 0\.1968155 | B | 0\.1207513 | 1\.533501 | \-1\.4574367 |
| 2\.6308008 | B | 0\.1207513 | 1\.533501 | 0\.9765486 |
| 2\.0293297 | B | 0\.1207513 | 1\.533501 | 0\.3750775 |
| 2\.1629037 | B | 0\.1207513 | 1\.533501 | 0\.5086516 |
| 1\.2514112 | B | 0\.1207513 | 1\.533501 | \-0\.4028410 |
Calculate sums of squares for `mu`, `a`, and `err`.
```
SS <- decomp %>%
summarise(mu = sum(mu*mu),
a = sum(a*a),
err = sum(err*err))
```
| mu | a | err |
| --- | --- | --- |
| 0\.1458088 | 23\.51625 | 8\.104182 |
If you’ve done everything right, `SS$mu + SS$a + SS$err` should equal the (uncorrected) sum of squares for `Y`, i.e. `sum(Y^2)`.
```
SS_Y <- sum(decomp$Y^2)
all.equal(SS_Y, SS$mu + SS$a + SS$err)
```
```
## [1] TRUE
```
Divide each sum of squares by its corresponding degrees of freedom (df) to calculate mean squares. The df for `mu` is 1, the df for factor `a` is `K-1` (K is the number of groups), and the df for `err` is `N - K` (N is the number of observations).
```
K <- n_distinct(dat$grp)
N <- nrow(dat)
df <- c(mu = 1, a = K - 1, err = N - K)
MS <- SS / df
```
| mu | a | err |
| --- | --- | --- |
| 0\.1458088 | 23\.51625 | 1\.013023 |
Then calculate an F\-ratio for `mu` and `a` by dividing their mean squares by the error term mean square. Get the p\-values that correspond to these F\-values using the `pf()` function.
```
F_mu <- MS$mu / MS$err
F_a <- MS$a / MS$err
p_mu <- pf(F_mu, df1 = df['mu'], df2 = df['err'], lower.tail = FALSE)
p_a <- pf(F_a, df1 = df['a'], df2 = df['err'], lower.tail = FALSE)
```
Put everything into a data frame to display it in the same way as the ANOVA summary function.
```
my_calcs <- data.frame(
term = c("Intercept", "grp", "Residuals"),
Df = df,
SS = c(SS$mu, SS$a, SS$err),
MS = c(MS$mu, MS$a, MS$err),
F = c(F_mu, F_a, NA),
p = c(p_mu, p_a, NA)
)
```
| | term | Df | SS | MS | F | p |
| --- | --- | --- | --- | --- | --- | --- |
| mu | Intercept | 1 | 0\.146 | 0\.146 | 0\.144 | 0\.714 |
| a | grp | 1 | 23\.516 | 23\.516 | 23\.214 | 0\.001 |
| err | Residuals | 8 | 8\.104 | 1\.013 | NA | NA |
Now run a one\-way ANOVA on the same data and compare its output to what you obtained in your calculations.
```
aov(Y ~ grp, data = dat) %>% summary(intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 0.146 0.146 0.144 0.71427
## grp 1 23.516 23.516 23.214 0.00132 **
## Residuals 8 8.104 1.013
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Using the code above, write your own function that takes a table of data and returns an ANOVA results table like the one above.
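One possible sketch of such a function (one of many ways to do it; it assumes a data frame with a numeric `Y` column and a grouping column `grp`, like `dat` above, and that the tidyverse is loaded):
```
anova_table <- function(data) {
  # decomposition matrix: grand mean, group effects, residual error
  decomp <- data %>%
    select(Y, grp) %>%
    mutate(mu = mean(Y)) %>%
    group_by(grp) %>%
    mutate(a = mean(Y) - mu) %>%
    ungroup() %>%
    mutate(err = Y - mu - a)

  # sums of squares, degrees of freedom, mean squares
  SS <- summarise(decomp, mu = sum(mu^2), a = sum(a^2), err = sum(err^2))
  K  <- n_distinct(data$grp)
  df <- c(mu = 1, a = K - 1, err = nrow(data) - K)
  MS <- SS / df

  # F ratios and p-values (none for the residual row)
  F_ratio <- c(MS$mu / MS$err, MS$a / MS$err, NA)
  p_val   <- pf(F_ratio, df1 = df, df2 = df[["err"]], lower.tail = FALSE)

  data.frame(term = c("Intercept", "grp", "Residuals"),
             Df = df, SS = unlist(SS), MS = unlist(MS),
             F = F_ratio, p = p_val)
}

anova_table(dat)  # should reproduce the table above
```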
9\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [categorical](https://psyteachr.github.io/glossary/c#categorical) | Data that can only take certain values, such as types of pet. |
| [coding scheme](https://psyteachr.github.io/glossary/c#coding.scheme) | How to represent categorical variables with numbers for use in models |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [dependent variable](https://psyteachr.github.io/glossary/d#dependent.variable) | The target variable that is being analyzed, whose value is assumed to depend on other variables. |
| [effect code](https://psyteachr.github.io/glossary/e#effect.code) | A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means. |
| [error term](https://psyteachr.github.io/glossary/e#error.term) | The term in a model that represents the difference between the actual and predicted values |
| [general linear model](https://psyteachr.github.io/glossary/g#general.linear.model) | A mathematical model comparing how one or more independent variables affect a continuous dependent variable |
| [independent variable](https://psyteachr.github.io/glossary/i#independent.variable) | A variable whose value is assumed to influence the value of a dependent variable. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [residual error](https://psyteachr.github.io/glossary/r#residual.error) | That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
9\.8 Exercises
--------------
Download the [exercises](exercises/09_glm_exercise.Rmd). See the [answers](exercises/09_glm_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(9)
# run this to access the answers
dataskills::exercise(9, answers = TRUE)
```
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/glm.html |
Chapter 9 Introduction to GLM
=============================
9\.1 Learning Objectives
------------------------
### 9\.1\.1 Basic
1. Define the [components](glm.html#glm-components) of the GLM
2. [Simulate data](glm.html#sim-glm) using GLM equations [(video)](https://youtu.be/JQ90LnVCbKc)
3. Identify the model parameters that correspond to the data\-generation parameters
4. Understand and plot [residuals](glm.html#residuals) [(video)](https://youtu.be/sr-NtxiH2Qk)
5. [Predict new values](glm.html#predict) using the model [(video)](https://youtu.be/0o4LEbVVWfM)
6. Explain the differences among [coding schemes](glm.html#coding-schemes) [(video)](https://youtu.be/SqL28AbLj3g)
### 9\.1\.2 Intermediate
7. Demonstrate the [relationships](glm.html#test-rels) among two\-sample t\-test, one\-way ANOVA, and linear regression
8. Given data and a GLM, [generate a decomposition matrix](glm.html#decomp) and calculate sums of squares, mean squares, and F ratios for a one\-way ANOVA
9\.2 Resources
--------------
* [Stub for this lesson](stubs/9_glm.Rmd)
* [Jeff Miller and Patricia Haden, Statistical Analysis with the Linear Model (free online textbook)](http://www.otago.ac.nz/psychology/otago039309.pdf)
* [lecture slides introducing the General Linear Model](slides/08_glm_slides.pdf)
* [GLM shiny app](http://rstudio2.psy.gla.ac.uk/Dale/GLM)
* [F distribution](http://rstudio2.psy.gla.ac.uk/Dale/fdist)
9\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(broom)
set.seed(30250) # makes sure random numbers are reproducible
```
9\.4 GLM
--------
### 9\.4\.1 What is the GLM?
The [General Linear Model](https://psyteachr.github.io/glossary/g#general-linear-model "A mathematical model comparing how one or more independent variables affect a continuous dependent variable") (GLM) is a general mathematical framework for expressing relationships among variables that can express or test linear relationships between a numerical [dependent variable](https://psyteachr.github.io/glossary/d#dependent-variable "The target variable that is being analyzed, whose value is assumed to depend on other variables.") and any combination of [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") or [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") [independent variables](https://psyteachr.github.io/glossary/i#independent-variable "A variable whose value is assumed to influence the value of a dependent variable.").
### 9\.4\.2 Components
There are some mathematical conventions that you need to learn to understand the equations representing linear models. Once you understand those, learning about the GLM will get much easier.
| Component of GLM | Notation |
| --- | --- |
| Dependent Variable (DV) | \\(Y\\) |
| Grand Average | \\(\\mu\\) (the Greek letter “mu”) |
| Main Effects | \\(A, B, C, \\ldots\\) |
| Interactions | \\(AB, AC, BC, ABC, \\ldots\\) |
| Random Error | \\(S(Group)\\) |
The linear equation predicts the dependent variable (\\(Y\\)) as the sum of the grand average value of \\(Y\\) (\\(\\mu\\), also called the intercept), the main effects of all the predictor variables (\\(A\+B\+C\+ \\ldots\\)), the interactions among all the predictor variables (\\(AB, AC, BC, ABC, \\ldots\\)), and some random error (\\(S(Group)\\)). The equation for a model with two predictor variables (\\(A\\) and \\(B\\)) and their interaction (\\(AB\\)) is written like this:
\\(Y\\) \~ \\(\\mu\+A\+B\+AB\+S(Group)\\)
Don’t worry if this doesn’t make sense until we walk through a concrete example.
### 9\.4\.3 Simulating data from GLM
A good way to learn about linear models is to [simulate](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") data where you know exactly how the variables are related, and then analyse this simulated data to see where the parameters show up in the analysis.
We’ll start with a very simple linear model that just has a single categorical factor with two levels. Let’s say we’re predicting reaction times for congruent and incongruent trials in a Stroop task for a single participant. Average reaction time (`mu`) is 800ms, and is 50ms faster for congruent than incongruent trials (`effect`).
A **factor** is a categorical variable that is used to divide subjects into groups, usually to draw some comparison. Factors are composed of different **levels**. Do not confuse factors with levels!
In the example above, trial type is the factor, and congruent and incongruent are its two levels.
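For instance (illustration only, with a made\-up vector `tt`), an R factor stores its levels explicitly:
```
tt <- factor(c("congruent", "incongruent", "congruent", "incongruent"))
levels(tt)   # "congruent" "incongruent" -- the two levels of this one factor
nlevels(tt)  # 2
```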
You need to represent [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") factors with numbers. The numbers, or [coding scheme](https://psyteachr.github.io/glossary/c#coding-scheme "How to represent categorical variables with numbers for use in models") you choose will affect the numbers you get out of the analysis and how you need to interpret them. Here, we will [effect code](https://psyteachr.github.io/glossary/e#effect-code "A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means.") the trial types so that congruent trials are coded as \+0\.5, and incongruent trials are coded as \-0\.5\.
A person won’t always respond exactly the same way. They might be a little faster on some trials than others, due to random fluctuations in attention, learning about the task, or fatigue. So we can add an [error term](https://psyteachr.github.io/glossary/e#error-term "The term in a model that represents the difference between the actual and predicted values") to each trial. We can’t know how much any specific trial will differ, but we can characterise the distribution of how much trials differ from average and then sample from this distribution.
Here, we’ll assume the error term is sampled from a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") with a [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of 100 ms (the mean of the error term distribution is always 0\). We’ll also sample 100 trials of each type, so we can see a range of variation.
So first create variables for all of the parameters that describe your data.
```
n_per_grp <- 100
mu <- 800 # average RT
effect <- 50 # average difference between congruent and incongruent trials
error_sd <- 100 # standard deviation of the error term
trial_types <- c("congruent" = 0.5, "incongruent" = -0.5) # effect code
```
Then simulate the data by creating a data table with a row for each trial and columns for the trial type and the error term (random numbers sampled from a normal distribution with the SD specified by `error_sd`). For categorical variables, include both a column with the text labels (`trial_type`) and another column with the coded version (`trial_type.e`) to make it easier to check what the codings mean and to use for graphing. Calculate the dependent variable (`RT`) as the sum of the grand mean (`mu`), the coefficient (`effect`) multiplied by the effect\-coded predictor variable (`trial_type.e`), and the error term.
```
dat <- data.frame(
trial_type = rep(names(trial_types), each = n_per_grp)
) %>%
mutate(
trial_type.e = recode(trial_type, !!!trial_types),
error = rnorm(nrow(.), 0, error_sd),
RT = mu + effect*trial_type.e + error
)
```
The `!!!` (triple bang) in the code `recode(trial_type, !!!trial_types)` is a way to expand the vector `trial_types <- c("congruent" = 0.5, "incongruent" = -0.5)`. It’s equivalent to `recode(trial_type, "congruent" = 0.5, "incongruent" = -0.5)`. This pattern avoids making mistakes with recoding because there is only one place where you set up the category\-to\-code mapping (in the `trial_types` vector).
Last but not least, always plot simulated data to make sure it looks like you expect.
```
ggplot(dat, aes(trial_type, RT)) +
geom_violin() +
geom_boxplot(aes(fill = trial_type),
width = 0.25, show.legend = FALSE)
```
Figure 9\.1: Simulated Data
### 9\.4\.4 Linear Regression
Now we can analyse the simulated data using the function `lm()`. It takes the formula as the first argument (this is the same as the data\-generating equation, but you can omit the error term, which is implied) and the data table as the second argument. Use the `summary()` function to see the statistical summary.
```
my_lm <- lm(RT ~ trial_type.e, data = dat)
summary(my_lm)
```
```
##
## Call:
## lm(formula = RT ~ trial_type.e, data = dat)
##
## Residuals:
## Min 1Q Median 3Q Max
## -302.110 -70.052 0.948 68.262 246.220
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 788.192 7.206 109.376 < 2e-16 ***
## trial_type.e 61.938 14.413 4.297 2.71e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 101.9 on 198 degrees of freedom
## Multiple R-squared: 0.08532, Adjusted R-squared: 0.0807
## F-statistic: 18.47 on 1 and 198 DF, p-value: 2.707e-05
```
Notice how the **estimate** for the `(Intercept)` is close to the value we set for `mu` and the estimate for `trial_type.e` is close to the value we set for `effect`.
Change the values of `mu` and `effect`, resimulate the data, and re\-run the linear model. What happens to the estimates?
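One convenient way to experiment is to wrap the simulation in a small function so you can vary `mu` and `effect` without retyping everything. This is just a sketch (the function name `sim_stroop` and its defaults are our own, and it assumes the tidyverse is loaded as in the chapter setup):
```
# Sketch: resimulate the Stroop data with whatever parameters you like
sim_stroop <- function(mu, effect, n_per_grp = 100, error_sd = 100) {
  trial_types <- c("congruent" = 0.5, "incongruent" = -0.5)
  data.frame(trial_type = rep(names(trial_types), each = n_per_grp)) %>%
    mutate(
      trial_type.e = recode(trial_type, !!!trial_types),
      RT = mu + effect * trial_type.e + rnorm(n_per_grp * 2, 0, error_sd)
    )
}

# the intercept should land near mu and the slope near effect
lm(RT ~ trial_type.e, data = sim_stroop(mu = 600, effect = 80))$coefficients
```
Whatever values you choose, the `(Intercept)` estimate should track `mu` and the `trial_type.e` estimate should track `effect`, give or take sampling error.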
### 9\.4\.5 Residuals
You can use the `residuals()` function to extract the error term for each data point. These are the DV values minus the model’s estimates for the intercept and trial type. We’ll make a density plot of the [residuals](https://psyteachr.github.io/glossary/r#residual-error "That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors.") below and compare it to the normal distribution we used for the error term.
```
res <- residuals(my_lm)
ggplot(dat) +
stat_function(aes(0), color = "grey60",
fun = dnorm, n = 101,
args = list(mean = 0, sd = error_sd)) +
geom_density(aes(res, color = trial_type))
```
Figure 9\.2: Model residuals should be approximately normally distributed for each group
You can also compare the model residuals to the simulated error values. If the model is accurate, they should be almost identical. If the intercept estimate is slightly off, the points will be slightly above or below the black line. If the estimate for the effect of trial type is slightly off, there will be a small, systematic difference between residuals for congruent and incongruent trials.
```
ggplot(dat) +
geom_abline(slope = 1) +
  geom_point(aes(error, res, color = trial_type)) +
ylab("Model Residuals") +
xlab("Simulated Error")
```
Figure 9\.3: Model residuals should be very similar to the simulated error
What happens to the residuals if you fit a model that ignores trial type (e.g., `lm(RT ~ 1, data = dat)`)?
### 9\.4\.6 Predict New Values
You can use the estimates from your model to predict new data points, given values for the model parameters. For this simple example, we just need to know the trial type to make a prediction.
For congruent trials, you would predict that a new data point would be equal to the intercept estimate plus the trial type estimate multiplied by 0\.5 (the effect code for congruent trials).
```
int_est <- my_lm$coefficients[["(Intercept)"]]
tt_est <- my_lm$coefficients[["trial_type.e"]]
tt_code <- trial_types[["congruent"]]
new_congruent_RT <- int_est + tt_est * tt_code
new_congruent_RT
```
```
## [1] 819.1605
```
You can also use the `predict()` function to do this more easily. The second argument is a data table with columns for the factors in the model and rows with the values that you want to use for the prediction.
```
predict(my_lm, newdata = tibble(trial_type.e = 0.5))
```
```
## 1
## 819.1605
```
If you look up this function using `?predict`, you will see that “The function invokes particular methods which depend on the class of the first argument.” What this means is that `predict()` works differently depending on whether you’re predicting from the output of `lm()` or other analysis functions. You can search for help on the lm version with `?predict.lm`.
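As a quick sketch of what `predict.lm()` offers, you can ask for predictions for several rows at once and add confidence intervals around them (the `new_trials` data frame is our own illustration):
```
# predictions for congruent (+0.5) and incongruent (-0.5) trials,
# with confidence intervals around each fitted value
new_trials <- data.frame(trial_type.e = c(0.5, -0.5))
predict(my_lm, newdata = new_trials, interval = "confidence")
```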
### 9\.4\.7 Coding Categorical Variables
In the example above, we used **effect coding** for trial type. You can also use **sum coding**, which assigns \+1 and \-1 to the levels instead of \+0\.5 and \-0\.5\. More commonly, you might want to use **treatment coding**, which assigns 0 to one level (usually a baseline or control condition) and 1 to the other level (usually a treatment or experimental condition).
Here we will add sum\-coded and treatment\-coded versions of `trial_type` to the dataset using the `recode()` function.
```
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, "congruent" = +1, "incongruent" = -1),
trial_type.tr = recode(trial_type, "congruent" = 1, "incongruent" = 0)
)
```
If you define named vectors with your levels and coding, you can use them with the `recode()` function if you expand them using `!!!`.
```
tt_sum <- c("congruent" = +1,
"incongruent" = -1)
tt_tr <- c("congruent" = 1,
"incongruent" = 0)
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, !!!tt_sum),
trial_type.tr = recode(trial_type, !!!tt_tr)
)
```
Here are the coefficients for the effect\-coded version. They should be the same as those from the last analysis.
```
lm(RT ~ trial_type.e, data = dat)$coefficients
```
```
## (Intercept) trial_type.e
## 788.19166 61.93773
```
Here are the coefficients for the sum\-coded version. This gives the same results as effect coding, except that the estimate for the categorical factor is exactly half as large: it represents the difference between each trial type and the hypothetical condition of 0 (the overall mean RT), rather than the difference between the two trial types.
```
lm(RT ~ trial_type.sum, data = dat)$coefficients
```
```
## (Intercept) trial_type.sum
## 788.19166 30.96887
```
Here are the coefficients for the treatment\-coded version. The estimate for the categorical factor will be the same as in the effect\-coded version, but the intercept will decrease. It will be equal to the intercept minus the estimate for trial type from the sum\-coded version.
```
lm(RT ~ trial_type.tr, data = dat)$coefficients
```
```
## (Intercept) trial_type.tr
## 757.22279 61.93773
```
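If you want to check that last relationship numerically, here is a quick sketch using the models fit above:
```
# grand-mean intercept (effect coding) minus the sum-coded estimate
# should reproduce the treatment-coded intercept (roughly 757.22 here)
coef(lm(RT ~ trial_type.e, data = dat))[["(Intercept)"]] -
  coef(lm(RT ~ trial_type.sum, data = dat))[["trial_type.sum"]]
```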
9\.5 Relationships among tests
------------------------------
### 9\.5\.1 T\-test
The t\-test is just a special, limited example of a general linear model.
```
t.test(RT ~ trial_type.e, data = dat, var.equal = TRUE)
```
```
##
## Two Sample t-test
##
## data: RT by trial_type.e
## t = -4.2975, df = 198, p-value = 2.707e-05
## alternative hypothesis: true difference in means between group -0.5 and group 0.5 is not equal to 0
## 95 percent confidence interval:
## -90.35945 -33.51601
## sample estimates:
## mean in group -0.5 mean in group 0.5
## 757.2228 819.1605
```
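To see the equivalence directly, you can put the regression and t\-test results side by side. This is a sketch using `my_lm` from above; the t statistic differs only in sign (because of how the groups are ordered) and the p\-values match.
```
# t statistic and p-value for the trial type coefficient in the regression
summary(my_lm)$coefficients["trial_type.e", c("t value", "Pr(>|t|)")]

# the same quantities from the two-sample t-test (note the flipped sign)
tt <- t.test(RT ~ trial_type.e, data = dat, var.equal = TRUE)
c(t = unname(tt$statistic), p = tt$p.value)
```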
What happens when you use other codings for trial type in the t\-test above? Which coding maps onto the results of the t\-test best?
### 9\.5\.2 ANOVA
ANOVA is also a special, limited version of the linear model.
```
my_aov <- aov(RT ~ trial_type.e, data = dat)
summary(my_aov, intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 124249219 124249219 11963.12 < 2e-16 ***
## trial_type.e 1 191814 191814 18.47 2.71e-05 ***
## Residuals 198 2056432 10386
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The easiest way to get parameters out of an analysis is to use the `broom::tidy()` function. This returns a tidy table that you can extract numbers of interest from. Here, we just want to get the F\-value for the effect of trial\_type. Compare the square root of this value to the t\-value from the t\-test above.
```
f <- broom::tidy(my_aov)$statistic[1]
sqrt(f)
```
```
## [1] 4.297498
```
9\.6 Understanding ANOVA
------------------------
We’ll walk through an example of a one\-way ANOVA with the following equation:
\\(Y\_{ij} \= \\mu \+ A\_i \+ S(A)\_{ij}\\)
This means that each data point (\\(Y\_{ij}\\)) is predicted to be the sum of the grand mean (\\(\\mu\\)), plus the effect of factor A (\\(A\_i\\)), plus some residual error (\\(S(A)\_{ij}\\)).
### 9\.6\.1 Means, Variability, and Deviation Scores
Let’s create a simple simulation function so you can quickly create a two\-sample dataset with specified Ns, means, and SDs.
```
two_sample <- function(n = 10, m1 = 0, m2 = 0, sd1 = 1, sd2 = 1) {
s1 <- rnorm(n, m1, sd1)
s2 <- rnorm(n, m2, sd2)
data.frame(
Y = c(s1, s2),
grp = rep(c("A", "B"), each = n)
)
}
```
Now we will use `two_sample()` to create a dataset `dat` with N\=5 per group, means of \-2 and \+2, and SDs of 1 and 1 (yes, this is an effect size of d \= 4\).
```
dat <- two_sample(5, -2, +2, 1, 1)
```
You can calculate how each data point (`Y`) deviates from the overall sample mean (\\(\\hat{\\mu}\\)), shown as the horizontal grey line in the figure below; the vertical grey lines show these deviations. You can also calculate how far each point is from its group\-specific mean (\\(\\hat{A\_i}\\)), shown as the horizontal coloured lines; the vertical coloured lines show those deviations.
Figure 9\.4: Deviations of each data point (Y) from the overall and group means
You can use these deviations to calculate variability between groups and within groups. ANOVA tests whether the variability between groups is larger than that within groups, accounting for the number of groups and observations.
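If you want to compute these deviations yourself, here is a sketch using dplyr verbs (the tidyverse is loaded in the chapter setup; the column names `grand_mean`, `group_mean`, `dev_overall`, and `dev_group` are our own):
```
# Sketch of the deviations described above
dev <- dat %>%
  mutate(grand_mean = mean(Y)) %>%   # overall sample mean (mu-hat)
  group_by(grp) %>%
  mutate(group_mean = mean(Y)) %>%   # group-specific mean (A-hat)
  ungroup() %>%
  mutate(
    dev_overall = Y - grand_mean,    # deviation from the overall mean
    dev_group   = Y - group_mean     # deviation from the group mean
  )
```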
### 9\.6\.2 Decomposition matrices
We can use the estimation equations for a one\-factor ANOVA to calculate the model components.
* `mu` is the overall mean
* `a` is how different each group mean is from the overall mean
* `err` is residual error, calculated by subtracting `mu` and `a` from `Y`
This produces a *decomposition matrix*, a table with columns for `Y`, `mu`, `a`, and `err`.
```
decomp <- dat %>%
select(Y, grp) %>%
mutate(mu = mean(Y)) %>% # calculate mu_hat
group_by(grp) %>%
mutate(a = mean(Y) - mu) %>% # calculate a_hat for each grp
ungroup() %>%
mutate(err = Y - mu - a) # calculate residual error
```
| Y | grp | mu | a | err |
| --- | --- | --- | --- | --- |
| \-1\.4770938 | A | 0\.1207513 | \-1\.533501 | \-0\.0643443 |
| \-2\.9508741 | A | 0\.1207513 | \-1\.533501 | \-1\.5381246 |
| \-0\.6376736 | A | 0\.1207513 | \-1\.533501 | 0\.7750759 |
| \-1\.7579084 | A | 0\.1207513 | \-1\.533501 | \-0\.3451589 |
| \-0\.2401977 | A | 0\.1207513 | \-1\.533501 | 1\.1725518 |
| 0\.1968155 | B | 0\.1207513 | 1\.533501 | \-1\.4574367 |
| 2\.6308008 | B | 0\.1207513 | 1\.533501 | 0\.9765486 |
| 2\.0293297 | B | 0\.1207513 | 1\.533501 | 0\.3750775 |
| 2\.1629037 | B | 0\.1207513 | 1\.533501 | 0\.5086516 |
| 1\.2514112 | B | 0\.1207513 | 1\.533501 | \-0\.4028410 |
Calculate sums of squares for `mu`, `a`, and `err`.
```
SS <- decomp %>%
summarise(mu = sum(mu*mu),
a = sum(a*a),
err = sum(err*err))
```
| mu | a | err |
| --- | --- | --- |
| 0\.1458088 | 23\.51625 | 8\.104182 |
If you’ve done everything right, `SS$mu + SS$a + SS$err` should equal the sum of squares for Y.
```
SS_Y <- sum(decomp$Y^2)
all.equal(SS_Y, SS$mu + SS$a + SS$err)
```
```
## [1] TRUE
```
Divide each sum of squares by its corresponding degrees of freedom (df) to calculate mean squares. The df for `mu` is 1, the df for factor `a` is `K-1` (K is the number of groups), and the df for `err` is `N - K` (N is the number of observations).
```
K <- n_distinct(dat$grp)
N <- nrow(dat)
df <- c(mu = 1, a = K - 1, err = N - K)
MS <- SS / df
```
| mu | a | err |
| --- | --- | --- |
| 0\.1458088 | 23\.51625 | 1\.013023 |
Then calculate an F\-ratio for `mu` and `a` by dividing their mean squares by the error term mean square. Get the p\-values that correspond to these F\-values using the `pf()` function.
```
F_mu <- MS$mu / MS$err
F_a <- MS$a / MS$err
p_mu <- pf(F_mu, df1 = df['mu'], df2 = df['err'], lower.tail = FALSE)
p_a <- pf(F_a, df1 = df['a'], df2 = df['err'], lower.tail = FALSE)
```
Put everything into a data frame to display it in the same way as the ANOVA summary function.
```
my_calcs <- data.frame(
term = c("Intercept", "grp", "Residuals"),
Df = df,
SS = c(SS$mu, SS$a, SS$err),
MS = c(MS$mu, MS$a, MS$err),
F = c(F_mu, F_a, NA),
p = c(p_mu, p_a, NA)
)
```
| | term | Df | SS | MS | F | p |
| --- | --- | --- | --- | --- | --- | --- |
| mu | Intercept | 1 | 0\.146 | 0\.146 | 0\.144 | 0\.714 |
| a | grp | 1 | 23\.516 | 23\.516 | 23\.214 | 0\.001 |
| err | Residuals | 8 | 8\.104 | 1\.013 | NA | NA |
Now run a one\-way ANOVA on your results and compare it to what you obtained in your calculations.
```
aov(Y ~ grp, data = dat) %>% summary(intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 0.146 0.146 0.144 0.71427
## grp 1 23.516 23.516 23.214 0.00132 **
## Residuals 8 8.104 1.013
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Using the code above, write your own function that takes a table of data and returns an ANOVA results table like the one above.
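If you get stuck, here is one possible skeleton (a sketch, not the only solution; the function name `my_anova` is our own and it assumes the input table has columns `Y` and `grp`, like `dat` above):
```
my_anova <- function(data) {
  # decomposition matrix: grand mean, group effects, residual error
  decomp <- data %>%
    mutate(mu = mean(Y)) %>%
    group_by(grp) %>%
    mutate(a = mean(Y) - mu) %>%
    ungroup() %>%
    mutate(err = Y - mu - a)

  K  <- n_distinct(data$grp)
  N  <- nrow(data)
  df <- c(mu = 1, a = K - 1, err = N - K)

  SS <- summarise(decomp, mu = sum(mu^2), a = sum(a^2), err = sum(err^2))
  MS <- SS / df

  F_vals <- c(MS$mu / MS$err, MS$a / MS$err, NA)
  p_vals <- c(pf(F_vals[1], df["mu"], df["err"], lower.tail = FALSE),
              pf(F_vals[2], df["a"], df["err"], lower.tail = FALSE),
              NA)

  data.frame(term = c("Intercept", "grp", "Residuals"),
             Df = df,
             SS = unlist(SS),
             MS = unlist(MS),
             F = F_vals,
             p = p_vals)
}

my_anova(dat)
```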
9\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [categorical](https://psyteachr.github.io/glossary/c#categorical) | Data that can only take certain values, such as types of pet. |
| [coding scheme](https://psyteachr.github.io/glossary/c#coding.scheme) | How to represent categorical variables with numbers for use in models |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [dependent variable](https://psyteachr.github.io/glossary/d#dependent.variable) | The target variable that is being analyzed, whose value is assumed to depend on other variables. |
| [effect code](https://psyteachr.github.io/glossary/e#effect.code) | A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means. |
| [error term](https://psyteachr.github.io/glossary/e#error.term) | The term in a model that represents the difference between the actual and predicted values |
| [general linear model](https://psyteachr.github.io/glossary/g#general.linear.model) | A mathematical model comparing how one or more independent variables affect a continuous dependent variable |
| [independent variable](https://psyteachr.github.io/glossary/i#independent.variable) | A variable whose value is assumed to influence the value of a dependent variable. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [residual error](https://psyteachr.github.io/glossary/r#residual.error) | That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
9\.8 Exercises
--------------
Download the [exercises](exercises/09_glm_exercise.Rmd). See the [answers](exercises/09_glm_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(9)
# run this to access the answers
dataskills::exercise(9, answers = TRUE)
```
9\.4 GLM
--------
### 9\.4\.1 What is the GLM?
The [General Linear Model](https://psyteachr.github.io/glossary/g#general-linear-model "A mathematical model comparing how one or more independent variables affect a continuous dependent variable") (GLM) is a general mathematical framework for expressing relationships among variables that can express or test linear relationships between a numerical [dependent variable](https://psyteachr.github.io/glossary/d#dependent-variable "The target variable that is being analyzed, whose value is assumed to depend on other variables.") and any combination of [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") or [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") [independent variables](https://psyteachr.github.io/glossary/i#independent-variable "A variable whose value is assumed to influence the value of a dependent variable.").
### 9\.4\.2 Components
There are some mathematical conventions that you need to learn to understand the equations representing linear models. Once you understand those, learning about the GLM will get much easier.
| Component of GLM | Notation |
| --- | --- |
| Dependent Variable (DV) | \\(Y\\) |
| Grand Average | \\(\\mu\\) (the Greek letter “mu”) |
| Main Effects | \\(A, B, C, \\ldots\\) |
| Interactions | \\(AB, AC, BC, ABC, \\ldots\\) |
| Random Error | \\(S(Group)\\) |
The linear equation predicts the dependent variable (\\(Y\\)) as the sum of the grand average value of \\(Y\\) (\\(\\mu\\), also called the intercept), the main effects of all the predictor variables (\\(A\+B\+C\+ \\ldots\\)), the interactions among all the predictor variables (\\(AB, AC, BC, ABC, \\ldots\\)), and some random error (\\(S(Group)\\)). The equation for a model with two predictor variables (\\(A\\) and \\(B\\)) and their interaction (\\(AB\\)) is written like this:
\\(Y\\) \~ \\(\\mu\+A\+B\+AB\+S(Group)\\)
Don’t worry if this doesn’t make sense until we walk through a concrete example.
### 9\.4\.3 Simulating data from GLM
A good way to learn about linear models is to [simulate](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") data where you know exactly how the variables are related, and then analyse this simulated data to see where the parameters show up in the analysis.
We’ll start with a very simple linear model that just has a single categorical factor with two levels. Let’s say we’re predicting reaction times for congruent and incongruent trials in a Stroop task for a single participant. Average reaction time (`mu`) is 800ms, and is 50ms faster for congruent than incongruent trials (`effect`).
A **factor** is a categorical variable that is used to divide subjects into groups, usually to draw some comparison. Factors are composed of different **levels**. Do not confuse factors with levels!
In the example above, trial type is a factor level, incongrunt is a factor level, and congruent is a factor level.
You need to represent [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") factors with numbers. The numbers, or [coding scheme](https://psyteachr.github.io/glossary/c#coding-scheme "How to represent categorical variables with numbers for use in models") you choose will affect the numbers you get out of the analysis and how you need to interpret them. Here, we will [effect code](https://psyteachr.github.io/glossary/e#effect-code "A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means.") the trial types so that congruent trials are coded as \+0\.5, and incongruent trials are coded as \-0\.5\.
A person won’t always respond exactly the same way. They might be a little faster on some trials than others, due to random fluctuations in attention, learning about the task, or fatigue. So we can add an [error term](https://psyteachr.github.io/glossary/e#error-term "The term in a model that represents the difference between the actual and predicted values") to each trial. We can’t know how much any specific trial will differ, but we can characterise the distribution of how much trials differ from average and then sample from this distribution.
Here, we’ll assume the error term is sampled from a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") with a [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of 100 ms (the mean of the error term distribution is always 0\). We’ll also sample 100 trials of each type, so we can see a range of variation.
So first create variables for all of the parameters that describe your data.
```
n_per_grp <- 100
mu <- 800 # average RT
effect <- 50 # average difference between congruent and incongruent trials
error_sd <- 100 # standard deviation of the error term
trial_types <- c("congruent" = 0.5, "incongruent" = -0.5) # effect code
```
Then simulate the data by creating a data table with a row for each trial and columns for the trial type and the error term (random numbers samples from a normal distribution with the SD specified by `error_sd`). For categorical variables, include both a column with the text labels (`trial_type`) and another column with the coded version (`trial_type.e`) to make it easier to check what the codings mean and to use for graphing. Calculate the dependent variable (`RT`) as the sum of the grand mean (`mu`), the coefficient (`effect`) multiplied by the effect\-coded predictor variable (`trial_type.e`), and the error term.
```
dat <- data.frame(
trial_type = rep(names(trial_types), each = n_per_grp)
) %>%
mutate(
trial_type.e = recode(trial_type, !!!trial_types),
error = rnorm(nrow(.), 0, error_sd),
RT = mu + effect*trial_type.e + error
)
```
The `!!!` (triple bang) in the code `recode(trial_type, !!!trial_types)` is a way to expand the vector `trial_types <- c(“congruent” = 0.5, “incongruent” = -0.5)`. It’s equivalent to `recode(trial_type, “congruent” = 0.5, “incongruent” = -0.5)`. This pattern avoids making mistakes with recoding because there is only one place where you set up the category to code mapping (in the `trial_types` vector).
Last but not least, always plot simulated data to make sure it looks like you expect.
```
ggplot(dat, aes(trial_type, RT)) +
geom_violin() +
geom_boxplot(aes(fill = trial_type),
width = 0.25, show.legend = FALSE)
```
Figure 9\.1: Simulated Data
### 9\.4\.4 Linear Regression
Now we can analyse the data we simulated using the function `lm()`. It takes the formula as the first argument. This is the same as the data\-generating equation, but you can omit the error term (this is implied), and takes the data table as the second argument. Use the `summary()` function to see the statistical summary.
```
my_lm <- lm(RT ~ trial_type.e, data = dat)
summary(my_lm)
```
```
##
## Call:
## lm(formula = RT ~ trial_type.e, data = dat)
##
## Residuals:
## Min 1Q Median 3Q Max
## -302.110 -70.052 0.948 68.262 246.220
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 788.192 7.206 109.376 < 2e-16 ***
## trial_type.e 61.938 14.413 4.297 2.71e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 101.9 on 198 degrees of freedom
## Multiple R-squared: 0.08532, Adjusted R-squared: 0.0807
## F-statistic: 18.47 on 1 and 198 DF, p-value: 2.707e-05
```
Notice how the **estimate** for the `(Intercept)` is close to the value we set for `mu` and the estimate for `trial_type.e` is close to the value we set for `effect`.
Change the values of `mu` and `effect`, resimulate the data, and re\-run the linear model. What happens to the estimates?
### 9\.4\.5 Residuals
You can use the `residuals()` function to extract the error term for each each data point. This is the DV values, minus the estimates for the intercept and trial type. We’ll make a density plot of the [residuals](https://psyteachr.github.io/glossary/r#residual-error "That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors.") below and compare it to the normal distribution we used for the error term.
```
res <- residuals(my_lm)
ggplot(dat) +
stat_function(aes(0), color = "grey60",
fun = dnorm, n = 101,
args = list(mean = 0, sd = error_sd)) +
geom_density(aes(res, color = trial_type))
```
Figure 9\.2: Model residuals should be approximately normally distributed for each group
You can also compare the model residuals to the simulated error values. If the model is accurate, they should be almost identical. If the intercept estimate is slightly off, the points will be slightly above or below the black line. If the estimate for the effect of trial type is slightly off, there will be a small, systematic difference between residuals for congruent and incongruent trials.
```
ggplot(dat) +
geom_abline(slope = 1) +
geom_point(aes(error, res,color = trial_type)) +
ylab("Model Residuals") +
xlab("Simulated Error")
```
Figure 9\.3: Model residuals should be very similar to the simulated error
What happens to the residuals if you fit a model that ignores trial type (e.g., `lm(Y ~ 1, data = dat)`)?
### 9\.4\.6 Predict New Values
You can use the estimates from your model to predict new data points, given values for the model parameters. For this simple example, we just need to know the trial type to make a prediction.
For congruent trials, you would predict that a new data point would be equal to the intercept estimate plus the trial type estimate multiplied by 0\.5 (the effect code for congruent trials).
```
int_est <- my_lm$coefficients[["(Intercept)"]]
tt_est <- my_lm$coefficients[["trial_type.e"]]
tt_code <- trial_types[["congruent"]]
new_congruent_RT <- int_est + tt_est * tt_code
new_congruent_RT
```
```
## [1] 819.1605
```
You can also use the `predict()` function to do this more easily. The second argument is a data table with columns for the factors in the model and rows with the values that you want to use for the prediction.
```
predict(my_lm, newdata = tibble(trial_type.e = 0.5))
```
```
## 1
## 819.1605
```
If you look up this function using `?predict`, you will see that “The function invokes particular methods which depend on the class of the first argument.” What this means is that `predict()` works differently depending on whether you’re predicting from the output of `lm()` or other analysis functions. You can search for help on the lm version with `?predict.lm`.
### 9\.4\.7 Coding Categorical Variables
In the example above, we used **effect coding** for trial type. You can also use **sum coding**, which assigns \+1 and \-1 to the levels instead of \+0\.5 and \-0\.5\. More commonly, you might want to use **treatment coding**, which assigns 0 to one level (usually a baseline or control condition) and 1 to the other level (usually a treatment or experimental condition).
Here we will add sum\-coded and treatment\-coded versions of `trial_type` to the dataset using the `recode()` function.
```
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, "congruent" = +1, "incongruent" = -1),
trial_type.tr = recode(trial_type, "congruent" = 1, "incongruent" = 0)
)
```
If you define named vectors with your levels and coding, you can use them with the `recode()` function if you expand them using `!!!`.
```
tt_sum <- c("congruent" = +1,
"incongruent" = -1)
tt_tr <- c("congruent" = 1,
"incongruent" = 0)
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, !!!tt_sum),
trial_type.tr = recode(trial_type, !!!tt_tr)
)
```
Here are the coefficients for the effect\-coded version. They should be the same as those from the last analysis.
```
lm(RT ~ trial_type.e, data = dat)$coefficients
```
```
## (Intercept) trial_type.e
## 788.19166 61.93773
```
Here are the coefficients for the sum\-coded version. This give the same results as effect coding, except the estimate for the categorical factor will be exactly half as large, as it represents the difference between each trial type and the hypothetical condition of 0 (the overall mean RT), rather than the difference between the two trial types.
```
lm(RT ~ trial_type.sum, data = dat)$coefficients
```
```
## (Intercept) trial_type.sum
## 788.19166 30.96887
```
Here are the coefficients for the treatment\-coded version. The estimate for the categorical factor will be the same as in the effect\-coded version, but the intercept will decrease. It will be equal to the intercept minus the estimate for trial type from the sum\-coded version.
```
lm(RT ~ trial_type.tr, data = dat)$coefficients
```
```
## (Intercept) trial_type.tr
## 757.22279 61.93773
```
### 9\.4\.1 What is the GLM?
The [General Linear Model](https://psyteachr.github.io/glossary/g#general-linear-model "A mathematical model comparing how one or more independent variables affect a continuous dependent variable") (GLM) is a general mathematical framework for expressing relationships among variables that can express or test linear relationships between a numerical [dependent variable](https://psyteachr.github.io/glossary/d#dependent-variable "The target variable that is being analyzed, whose value is assumed to depend on other variables.") and any combination of [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") or [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") [independent variables](https://psyteachr.github.io/glossary/i#independent-variable "A variable whose value is assumed to influence the value of a dependent variable.").
### 9\.4\.2 Components
There are some mathematical conventions that you need to learn to understand the equations representing linear models. Once you understand those, learning about the GLM will get much easier.
| Component of GLM | Notation |
| --- | --- |
| Dependent Variable (DV) | \\(Y\\) |
| Grand Average | \\(\\mu\\) (the Greek letter “mu”) |
| Main Effects | \\(A, B, C, \\ldots\\) |
| Interactions | \\(AB, AC, BC, ABC, \\ldots\\) |
| Random Error | \\(S(Group)\\) |
The linear equation predicts the dependent variable (\\(Y\\)) as the sum of the grand average value of \\(Y\\) (\\(\\mu\\), also called the intercept), the main effects of all the predictor variables (\\(A\+B\+C\+ \\ldots\\)), the interactions among all the predictor variables (\\(AB, AC, BC, ABC, \\ldots\\)), and some random error (\\(S(Group)\\)). The equation for a model with two predictor variables (\\(A\\) and \\(B\\)) and their interaction (\\(AB\\)) is written like this:
\\(Y\\) \~ \\(\\mu\+A\+B\+AB\+S(Group)\\)
Don’t worry if this doesn’t make sense until we walk through a concrete example.
### 9\.4\.3 Simulating data from GLM
A good way to learn about linear models is to [simulate](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") data where you know exactly how the variables are related, and then analyse this simulated data to see where the parameters show up in the analysis.
We’ll start with a very simple linear model that just has a single categorical factor with two levels. Let’s say we’re predicting reaction times for congruent and incongruent trials in a Stroop task for a single participant. Average reaction time (`mu`) is 800ms, and is 50ms faster for congruent than incongruent trials (`effect`).
A **factor** is a categorical variable that is used to divide subjects into groups, usually to draw some comparison. Factors are composed of different **levels**. Do not confuse factors with levels!
In the example above, trial type is a factor level, incongrunt is a factor level, and congruent is a factor level.
You need to represent [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") factors with numbers. The numbers, or [coding scheme](https://psyteachr.github.io/glossary/c#coding-scheme "How to represent categorical variables with numbers for use in models") you choose will affect the numbers you get out of the analysis and how you need to interpret them. Here, we will [effect code](https://psyteachr.github.io/glossary/e#effect-code "A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means.") the trial types so that congruent trials are coded as \+0\.5, and incongruent trials are coded as \-0\.5\.
A person won’t always respond exactly the same way. They might be a little faster on some trials than others, due to random fluctuations in attention, learning about the task, or fatigue. So we can add an [error term](https://psyteachr.github.io/glossary/e#error-term "The term in a model that represents the difference between the actual and predicted values") to each trial. We can’t know how much any specific trial will differ, but we can characterise the distribution of how much trials differ from average and then sample from this distribution.
Here, we’ll assume the error term is sampled from a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") with a [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of 100 ms (the mean of the error term distribution is always 0\). We’ll also sample 100 trials of each type, so we can see a range of variation.
So first create variables for all of the parameters that describe your data.
```
n_per_grp <- 100
mu <- 800 # average RT
effect <- 50 # average difference between congruent and incongruent trials
error_sd <- 100 # standard deviation of the error term
trial_types <- c("congruent" = 0.5, "incongruent" = -0.5) # effect code
```
Then simulate the data by creating a data table with a row for each trial and columns for the trial type and the error term (random numbers samples from a normal distribution with the SD specified by `error_sd`). For categorical variables, include both a column with the text labels (`trial_type`) and another column with the coded version (`trial_type.e`) to make it easier to check what the codings mean and to use for graphing. Calculate the dependent variable (`RT`) as the sum of the grand mean (`mu`), the coefficient (`effect`) multiplied by the effect\-coded predictor variable (`trial_type.e`), and the error term.
```
dat <- data.frame(
trial_type = rep(names(trial_types), each = n_per_grp)
) %>%
mutate(
trial_type.e = recode(trial_type, !!!trial_types),
error = rnorm(nrow(.), 0, error_sd),
RT = mu + effect*trial_type.e + error
)
```
The `!!!` (triple bang) in the code `recode(trial_type, !!!trial_types)` is a way to expand the vector `trial_types <- c(“congruent” = 0.5, “incongruent” = -0.5)`. It’s equivalent to `recode(trial_type, “congruent” = 0.5, “incongruent” = -0.5)`. This pattern avoids making mistakes with recoding because there is only one place where you set up the category to code mapping (in the `trial_types` vector).
Last but not least, always plot simulated data to make sure it looks like you expect.
```
ggplot(dat, aes(trial_type, RT)) +
geom_violin() +
geom_boxplot(aes(fill = trial_type),
width = 0.25, show.legend = FALSE)
```
Figure 9\.1: Simulated Data
### 9\.4\.4 Linear Regression
Now we can analyse the data we simulated using the function `lm()`. It takes the formula as the first argument. This is the same as the data\-generating equation, but you can omit the error term (this is implied), and takes the data table as the second argument. Use the `summary()` function to see the statistical summary.
```
my_lm <- lm(RT ~ trial_type.e, data = dat)
summary(my_lm)
```
```
##
## Call:
## lm(formula = RT ~ trial_type.e, data = dat)
##
## Residuals:
## Min 1Q Median 3Q Max
## -302.110 -70.052 0.948 68.262 246.220
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 788.192 7.206 109.376 < 2e-16 ***
## trial_type.e 61.938 14.413 4.297 2.71e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 101.9 on 198 degrees of freedom
## Multiple R-squared: 0.08532, Adjusted R-squared: 0.0807
## F-statistic: 18.47 on 1 and 198 DF, p-value: 2.707e-05
```
Notice how the **estimate** for the `(Intercept)` is close to the value we set for `mu` and the estimate for `trial_type.e` is close to the value we set for `effect`.
Change the values of `mu` and `effect`, resimulate the data, and re\-run the linear model. What happens to the estimates?
### 9\.4\.5 Residuals
You can use the `residuals()` function to extract the error term for each each data point. This is the DV values, minus the estimates for the intercept and trial type. We’ll make a density plot of the [residuals](https://psyteachr.github.io/glossary/r#residual-error "That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors.") below and compare it to the normal distribution we used for the error term.
```
res <- residuals(my_lm)
ggplot(dat) +
stat_function(aes(0), color = "grey60",
fun = dnorm, n = 101,
args = list(mean = 0, sd = error_sd)) +
geom_density(aes(res, color = trial_type))
```
Figure 9\.2: Model residuals should be approximately normally distributed for each group
You can also compare the model residuals to the simulated error values. If the model is accurate, they should be almost identical. If the intercept estimate is slightly off, the points will be slightly above or below the black line. If the estimate for the effect of trial type is slightly off, there will be a small, systematic difference between residuals for congruent and incongruent trials.
```
ggplot(dat) +
geom_abline(slope = 1) +
geom_point(aes(error, res,color = trial_type)) +
ylab("Model Residuals") +
xlab("Simulated Error")
```
Figure 9\.3: Model residuals should be very similar to the simulated error
What happens to the residuals if you fit a model that ignores trial type (e.g., `lm(Y ~ 1, data = dat)`)?
### 9\.4\.6 Predict New Values
You can use the estimates from your model to predict new data points, given values for the model parameters. For this simple example, we just need to know the trial type to make a prediction.
For congruent trials, you would predict that a new data point would be equal to the intercept estimate plus the trial type estimate multiplied by 0\.5 (the effect code for congruent trials).
```
int_est <- my_lm$coefficients[["(Intercept)"]]
tt_est <- my_lm$coefficients[["trial_type.e"]]
tt_code <- trial_types[["congruent"]]
new_congruent_RT <- int_est + tt_est * tt_code
new_congruent_RT
```
```
## [1] 819.1605
```
You can also use the `predict()` function to do this more easily. The second argument is a data table with columns for the factors in the model and rows with the values that you want to use for the prediction.
```
predict(my_lm, newdata = tibble(trial_type.e = 0.5))
```
```
## 1
## 819.1605
```
If you look up this function using `?predict`, you will see that “The function invokes particular methods which depend on the class of the first argument.” What this means is that `predict()` works differently depending on whether you’re predicting from the output of `lm()` or other analysis functions. You can search for help on the lm version with `?predict.lm`.
### 9\.4\.7 Coding Categorical Variables
In the example above, we used **effect coding** for trial type. You can also use **sum coding**, which assigns \+1 and \-1 to the levels instead of \+0\.5 and \-0\.5\. More commonly, you might want to use **treatment coding**, which assigns 0 to one level (usually a baseline or control condition) and 1 to the other level (usually a treatment or experimental condition).
Here we will add sum\-coded and treatment\-coded versions of `trial_type` to the dataset using the `recode()` function.
```
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, "congruent" = +1, "incongruent" = -1),
trial_type.tr = recode(trial_type, "congruent" = 1, "incongruent" = 0)
)
```
If you define named vectors with your levels and coding, you can use them with the `recode()` function if you expand them using `!!!`.
```
tt_sum <- c("congruent" = +1,
"incongruent" = -1)
tt_tr <- c("congruent" = 1,
"incongruent" = 0)
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, !!!tt_sum),
trial_type.tr = recode(trial_type, !!!tt_tr)
)
```
Here are the coefficients for the effect\-coded version. They should be the same as those from the last analysis.
```
lm(RT ~ trial_type.e, data = dat)$coefficients
```
```
## (Intercept) trial_type.e
## 788.19166 61.93773
```
Here are the coefficients for the sum\-coded version. This give the same results as effect coding, except the estimate for the categorical factor will be exactly half as large, as it represents the difference between each trial type and the hypothetical condition of 0 (the overall mean RT), rather than the difference between the two trial types.
```
lm(RT ~ trial_type.sum, data = dat)$coefficients
```
```
## (Intercept) trial_type.sum
## 788.19166 30.96887
```
Here are the coefficients for the treatment\-coded version. The estimate for the categorical factor will be the same as in the effect\-coded version, but the intercept will decrease. It will be equal to the intercept minus the estimate for trial type from the sum\-coded version.
```
lm(RT ~ trial_type.tr, data = dat)$coefficients
```
```
## (Intercept) trial_type.tr
## 757.22279 61.93773
```
9\.5 Relationships among tests
------------------------------
### 9\.5\.1 T\-test
The t\-test is just a special, limited example of a general linear model.
```
t.test(RT ~ trial_type.e, data = dat, var.equal = TRUE)
```
```
##
## Two Sample t-test
##
## data: RT by trial_type.e
## t = -4.2975, df = 198, p-value = 2.707e-05
## alternative hypothesis: true difference in means between group -0.5 and group 0.5 is not equal to 0
## 95 percent confidence interval:
## -90.35945 -33.51601
## sample estimates:
## mean in group -0.5 mean in group 0.5
## 757.2228 819.1605
```
What happens when you use other codings for trial type in the t\-test above? Which coding maps onto the results of the t\-test best?
### 9\.5\.2 ANOVA
ANOVA is also a special, limited version of the linear model.
```
my_aov <- aov(RT ~ trial_type.e, data = dat)
summary(my_aov, intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 124249219 124249219 11963.12 < 2e-16 ***
## trial_type.e 1 191814 191814 18.47 2.71e-05 ***
## Residuals 198 2056432 10386
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The easiest way to get parameters out of an analysis is to use the `broom::tidy()` function. This returns a tidy table that you can extract numbers of interest from. Here, we just want to get the F\-value for the effect of trial\_type. Compare the square root of this value to the t\-value from the t\-tests above.
```
f <- broom::tidy(my_aov)$statistic[1]
sqrt(f)
```
```
## [1] 4.297498
```
### 9\.5\.1 T\-test
The t\-test is just a special, limited example of a general linear model.
```
t.test(RT ~ trial_type.e, data = dat, var.equal = TRUE)
```
```
##
## Two Sample t-test
##
## data: RT by trial_type.e
## t = -4.2975, df = 198, p-value = 2.707e-05
## alternative hypothesis: true difference in means between group -0.5 and group 0.5 is not equal to 0
## 95 percent confidence interval:
## -90.35945 -33.51601
## sample estimates:
## mean in group -0.5 mean in group 0.5
## 757.2228 819.1605
```
What happens when you use other codings for trial type in the t\-test above? Which coding maps onto the results of the t\-test best?
### 9\.5\.2 ANOVA
ANOVA is also a special, limited version of the linear model.
```
my_aov <- aov(RT ~ trial_type.e, data = dat)
summary(my_aov, intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 124249219 124249219 11963.12 < 2e-16 ***
## trial_type.e 1 191814 191814 18.47 2.71e-05 ***
## Residuals 198 2056432 10386
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The easiest way to get parameters out of an analysis is to use the `broom::tidy()` function. This returns a tidy table that you can extract numbers of interest from. Here, we just want to get the F\-value for the effect of trial\_type. Compare the square root of this value to the t\-value from the t\-tests above.
```
f <- broom::tidy(my_aov)$statistic[1]
sqrt(f)
```
```
## [1] 4.297498
```
9\.6 Understanding ANOVA
------------------------
We’ll walk through an example of a one\-way ANOVA with the following equation:
\\(Y\_{ij} \= \\mu \+ A\_i \+ S(A)\_{ij}\\)
This means that each data point (\\(Y\_{ij}\\)) is predicted to be the sum of the grand mean (\\(\\mu\\)), plus the effect of factor A (\\(A\_i\\)), plus some residual error (\\(S(A)\_{ij}\\)).
### 9\.6\.1 Means, Variability, and Deviation Scores
Let’s create a simple simulation function so you can quickly create a two\-sample dataset with specified Ns, means, and SDs.
```
two_sample <- function(n = 10, m1 = 0, m2 = 0, sd1 = 1, sd2 = 1) {
s1 <- rnorm(n, m1, sd1)
s2 <- rnorm(n, m2, sd2)
data.frame(
Y = c(s1, s2),
grp = rep(c("A", "B"), each = n)
)
}
```
Now we will use `two_sample()` to create a dataset `dat` with N\=5 per group, means of \-2 and \+2, and SDs of 1 and 1 (yes, this is an effect size of d \= 4\).
```
dat <- two_sample(5, -2, +2, 1, 1)
```
You can calculate how each data point (`Y`) deviates from the overall sample mean (\\(\\hat{\\mu}\\)), which is represented by the horizontal grey line below and the deviations are the vertical grey lines. You can also calculate how different each point is from its group\-specific mean (\\(\\hat{A\_i}\\)), which are represented by the horizontal coloured lines below and the deviations are the coloured vertical lines.
Figure 9\.4: Deviations of each data point (Y) from the overall and group means
You can use these deviations to calculate variability between groups and within groups. ANOVA tests whether the variability between groups is larger than that within groups, accounting for the number of groups and observations.
### 9\.6\.2 Decomposition matrices
We can use the estimation equations for a one\-factor ANOVA to calculate the model components.
* `mu` is the overall mean
* `a` is how different each group mean is from the overall mean
* `err` is residual error, calculated by subtracting `mu` and `a` from `Y`
This produces a *decomposition matrix*, a table with columns for `Y`, `mu`, `a`, and `err`.
```
decomp <- dat %>%
select(Y, grp) %>%
mutate(mu = mean(Y)) %>% # calculate mu_hat
group_by(grp) %>%
mutate(a = mean(Y) - mu) %>% # calculate a_hat for each grp
ungroup() %>%
mutate(err = Y - mu - a) # calculate residual error
```
| Y | grp | mu | a | err |
| --- | --- | --- | --- | --- |
| \-1\.4770938 | A | 0\.1207513 | \-1\.533501 | \-0\.0643443 |
| \-2\.9508741 | A | 0\.1207513 | \-1\.533501 | \-1\.5381246 |
| \-0\.6376736 | A | 0\.1207513 | \-1\.533501 | 0\.7750759 |
| \-1\.7579084 | A | 0\.1207513 | \-1\.533501 | \-0\.3451589 |
| \-0\.2401977 | A | 0\.1207513 | \-1\.533501 | 1\.1725518 |
| 0\.1968155 | B | 0\.1207513 | 1\.533501 | \-1\.4574367 |
| 2\.6308008 | B | 0\.1207513 | 1\.533501 | 0\.9765486 |
| 2\.0293297 | B | 0\.1207513 | 1\.533501 | 0\.3750775 |
| 2\.1629037 | B | 0\.1207513 | 1\.533501 | 0\.5086516 |
| 1\.2514112 | B | 0\.1207513 | 1\.533501 | \-0\.4028410 |
Calculate sums of squares for `mu`, `a`, and `err`.
```
SS <- decomp %>%
summarise(mu = sum(mu*mu),
a = sum(a*a),
err = sum(err*err))
```
| mu | a | err |
| --- | --- | --- |
| 0\.1458088 | 23\.51625 | 8\.104182 |
If you’ve done everything right, `SS$mu + SS$a + SS$err` should equal the sum of squares for Y.
```
SS_Y <- sum(decomp$Y^2)
all.equal(SS_Y, SS$mu + SS$a + SS$err)
```
```
## [1] TRUE
```
Divide each sum of squares by its corresponding degrees of freedom (df) to calculate mean squares. The df for `mu` is 1, the df for factor `a` is `K-1` (K is the number of groups), and the df for `err` is `N - K` (N is the number of observations).
```
K <- n_distinct(dat$grp)
N <- nrow(dat)
df <- c(mu = 1, a = K - 1, err = N - K)
MS <- SS / df
```
| mu | a | err |
| --- | --- | --- |
| 0\.1458088 | 23\.51625 | 1\.013023 |
Then calculate an F\-ratio for `mu` and `a` by dividing their mean squares by the error term mean square. Get the p\-values that correspond to these F\-values using the `pf()` function.
```
F_mu <- MS$mu / MS$err
F_a <- MS$a / MS$err
p_mu <- pf(F_mu, df1 = df['mu'], df2 = df['err'], lower.tail = FALSE)
p_a <- pf(F_a, df1 = df['a'], df2 = df['err'], lower.tail = FALSE)
```
Put everything into a data frame to display it in the same way as the ANOVA summary function.
```
my_calcs <- data.frame(
term = c("Intercept", "grp", "Residuals"),
Df = df,
SS = c(SS$mu, SS$a, SS$err),
MS = c(MS$mu, MS$a, MS$err),
F = c(F_mu, F_a, NA),
p = c(p_mu, p_a, NA)
)
```
| | term | Df | SS | MS | F | p |
| --- | --- | --- | --- | --- | --- | --- |
| mu | Intercept | 1 | 0\.146 | 0\.146 | 0\.144 | 0\.714 |
| a | grp | 1 | 23\.516 | 23\.516 | 23\.214 | 0\.001 |
| err | Residuals | 8 | 8\.104 | 1\.013 | NA | NA |
Now run a one\-way ANOVA on your results and compare it to what you obtained in your calculations.
```
aov(Y ~ grp, data = dat) %>% summary(intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 0.146 0.146 0.144 0.71427
## grp 1 23.516 23.516 23.214 0.00132 **
## Residuals 8 8.104 1.013
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Using the code above, write your own function that takes a table of data and returns the ANOVA results table like above.
9\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [categorical](https://psyteachr.github.io/glossary/c#categorical) | Data that can only take certain values, such as types of pet. |
| [coding scheme](https://psyteachr.github.io/glossary/c#coding.scheme) | How to represent categorical variables with numbers for use in models |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [dependent variable](https://psyteachr.github.io/glossary/d#dependent.variable) | The target variable that is being analyzed, whose value is assumed to depend on other variables. |
| [effect code](https://psyteachr.github.io/glossary/e#effect.code) | A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means. |
| [error term](https://psyteachr.github.io/glossary/e#error.term) | The term in a model that represents the difference between the actual and predicted values |
| [general linear model](https://psyteachr.github.io/glossary/g#general.linear.model) | A mathematical model comparing how one or more independent variables affect a continuous dependent variable |
| [independent variable](https://psyteachr.github.io/glossary/i#independent.variable) | A variable whose value is assumed to influence the value of a dependent variable. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [residual error](https://psyteachr.github.io/glossary/r#residual.error) | That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
9\.8 Exercises
--------------
Download the [exercises](exercises/09_glm_exercise.Rmd). See the [answers](exercises/09_glm_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(9)
# run this to access the answers
dataskills::exercise(9, answers = TRUE)
```
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/glm.html |
Chapter 9 Introduction to GLM
=============================
9\.1 Learning Objectives
------------------------
### 9\.1\.1 Basic
1. Define the [components](glm.html#glm-components) of the GLM
2. [Simulate data](glm.html#sim-glm) using GLM equations [(video)](https://youtu.be/JQ90LnVCbKc)
3. Identify the model parameters that correspond to the data\-generation parameters
4. Understand and plot [residuals](glm.html#residuals) [(video)](https://youtu.be/sr-NtxiH2Qk)
5. [Predict new values](glm.html#predict) using the model [(video)](https://youtu.be/0o4LEbVVWfM)
6. Explain the differences among [coding schemes](glm.html#coding-schemes) [(video)](https://youtu.be/SqL28AbLj3g)
### 9\.1\.2 Intermediate
7. Demonstrate the [relationships](glm.html#test-rels) among two\-sample t\-test, one\-way ANOVA, and linear regression
8. Given data and a GLM, [generate a decomposition matrix](glm.html#decomp) and calculate sums of squares, mean squares, and F ratios for a one\-way ANOVA
9\.2 Resources
--------------
* [Stub for this lesson](stubs/9_glm.Rmd)
* [Jeff Miller and Patricia Haden, Statistical Analysis with the Linear Model (free online textbook)](http://www.otago.ac.nz/psychology/otago039309.pdf)
* [lecture slides introducing the General Linear Model](slides/08_glm_slides.pdf)
* [GLM shiny app](http://rstudio2.psy.gla.ac.uk/Dale/GLM)
* [F distribution](http://rstudio2.psy.gla.ac.uk/Dale/fdist)
9\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(broom)
set.seed(30250) # makes sure random numbers are reproducible
```
9\.4 GLM
--------
### 9\.4\.1 What is the GLM?
The [General Linear Model](https://psyteachr.github.io/glossary/g#general-linear-model "A mathematical model comparing how one or more independent variables affect a continuous dependent variable") (GLM) is a general mathematical framework for expressing relationships among variables. It can express or test linear relationships between a numerical [dependent variable](https://psyteachr.github.io/glossary/d#dependent-variable "The target variable that is being analyzed, whose value is assumed to depend on other variables.") and any combination of [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") or [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") [independent variables](https://psyteachr.github.io/glossary/i#independent-variable "A variable whose value is assumed to influence the value of a dependent variable.").
### 9\.4\.2 Components
There are some mathematical conventions that you need to learn to understand the equations representing linear models. Once you understand those, learning about the GLM will get much easier.
| Component of GLM | Notation |
| --- | --- |
| Dependent Variable (DV) | \\(Y\\) |
| Grand Average | \\(\\mu\\) (the Greek letter “mu”) |
| Main Effects | \\(A, B, C, \\ldots\\) |
| Interactions | \\(AB, AC, BC, ABC, \\ldots\\) |
| Random Error | \\(S(Group)\\) |
The linear equation predicts the dependent variable (\\(Y\\)) as the sum of the grand average value of \\(Y\\) (\\(\\mu\\), also called the intercept), the main effects of all the predictor variables (\\(A\+B\+C\+ \\ldots\\)), the interactions among all the predictor variables (\\(AB, AC, BC, ABC, \\ldots\\)), and some random error (\\(S(Group)\\)). The equation for a model with two predictor variables (\\(A\\) and \\(B\\)) and their interaction (\\(AB\\)) is written like this:
\\(Y\\) \~ \\(\\mu\+A\+B\+AB\+S(Group)\\)
Don’t worry if this doesn’t make sense until we walk through a concrete example.
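If you already use R model formulas, it may help to see how this notation maps onto a formula. The snippet below is a small sketch of our own (with placeholder names `Y`, `A`, and `B`, not columns from a real dataset): the intercept (\\(\\mu\\)) and the error term are implied, and `A * B` expands to the main effects plus their interaction.
```
f <- Y ~ A * B
attr(terms(f), "term.labels") # "A" "B" "A:B": main effects plus the interaction
```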
### 9\.4\.3 Simulating data from GLM
A good way to learn about linear models is to [simulate](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") data where you know exactly how the variables are related, and then analyse this simulated data to see where the parameters show up in the analysis.
We’ll start with a very simple linear model that just has a single categorical factor with two levels. Let’s say we’re predicting reaction times for congruent and incongruent trials in a Stroop task for a single participant. Average reaction time (`mu`) is 800ms, and is 50ms faster for congruent than incongruent trials (`effect`).
A **factor** is a categorical variable that is used to divide subjects into groups, usually to draw some comparison. Factors are composed of different **levels**. Do not confuse factors with levels!
In the example above, trial type is the factor, and congruent and incongruent are its two levels.
You need to represent [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") factors with numbers. The numbers, or [coding scheme](https://psyteachr.github.io/glossary/c#coding-scheme "How to represent categorical variables with numbers for use in models") you choose will affect the numbers you get out of the analysis and how you need to interpret them. Here, we will [effect code](https://psyteachr.github.io/glossary/e#effect-code "A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means.") the trial types so that congruent trials are coded as \+0\.5, and incongruent trials are coded as \-0\.5\.
A person won’t always respond exactly the same way. They might be a little faster on some trials than others, due to random fluctuations in attention, learning about the task, or fatigue. So we can add an [error term](https://psyteachr.github.io/glossary/e#error-term "The term in a model that represents the difference between the actual and predicted values") to each trial. We can’t know how much any specific trial will differ, but we can characterise the distribution of how much trials differ from average and then sample from this distribution.
Here, we’ll assume the error term is sampled from a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") with a [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of 100 ms (the mean of the error term distribution is always 0\). We’ll also sample 100 trials of each type, so we can see a range of variation.
So first create variables for all of the parameters that describe your data.
```
n_per_grp <- 100
mu <- 800 # average RT
effect <- 50 # average difference between congruent and incongruent trials
error_sd <- 100 # standard deviation of the error term
trial_types <- c("congruent" = 0.5, "incongruent" = -0.5) # effect code
```
Then simulate the data by creating a data table with a row for each trial and columns for the trial type and the error term (random numbers sampled from a normal distribution with the SD specified by `error_sd`). For categorical variables, include both a column with the text labels (`trial_type`) and another column with the coded version (`trial_type.e`) to make it easier to check what the codings mean and to use for graphing. Calculate the dependent variable (`RT`) as the sum of the grand mean (`mu`), the coefficient (`effect`) multiplied by the effect\-coded predictor variable (`trial_type.e`), and the error term.
```
dat <- data.frame(
trial_type = rep(names(trial_types), each = n_per_grp)
) %>%
mutate(
trial_type.e = recode(trial_type, !!!trial_types),
error = rnorm(nrow(.), 0, error_sd),
RT = mu + effect*trial_type.e + error
)
```
The `!!!` (triple bang) in the code `recode(trial_type, !!!trial_types)` is a way to expand the vector `trial_types <- c("congruent" = 0.5, "incongruent" = -0.5)`. It’s equivalent to `recode(trial_type, "congruent" = 0.5, "incongruent" = -0.5)`. This pattern avoids mistakes with recoding because there is only one place where you set up the category\-to\-code mapping (in the `trial_types` vector).
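As a quick sanity check, you can confirm that splicing the named vector with `!!!` gives the same codes as writing the recoding out by hand (here `x` is just a small example vector of our own):
```
x <- c("congruent", "incongruent", "congruent")
identical(
  recode(x, !!!trial_types),
  recode(x, "congruent" = 0.5, "incongruent" = -0.5)
)
```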
Last but not least, always plot simulated data to make sure it looks like you expect.
```
ggplot(dat, aes(trial_type, RT)) +
geom_violin() +
geom_boxplot(aes(fill = trial_type),
width = 0.25, show.legend = FALSE)
```
Figure 9\.1: Simulated Data
### 9\.4\.4 Linear Regression
Now we can analyse the data we simulated using the function `lm()`. It takes the formula as the first argument (this is the same as the data\-generating equation, but you can omit the error term, which is implied) and the data table as the second argument. Use the `summary()` function to see the statistical summary.
```
my_lm <- lm(RT ~ trial_type.e, data = dat)
summary(my_lm)
```
```
##
## Call:
## lm(formula = RT ~ trial_type.e, data = dat)
##
## Residuals:
## Min 1Q Median 3Q Max
## -302.110 -70.052 0.948 68.262 246.220
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 788.192 7.206 109.376 < 2e-16 ***
## trial_type.e 61.938 14.413 4.297 2.71e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 101.9 on 198 degrees of freedom
## Multiple R-squared: 0.08532, Adjusted R-squared: 0.0807
## F-statistic: 18.47 on 1 and 198 DF, p-value: 2.707e-05
```
Notice how the **estimate** for the `(Intercept)` is close to the value we set for `mu` and the estimate for `trial_type.e` is close to the value we set for `effect`.
Change the values of `mu` and `effect`, resimulate the data, and re\-run the linear model. What happens to the estimates?
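One way to make this exploration easier is to wrap the simulation in a function. This is just a sketch (the name `sim_stroop` and its arguments are our own suggestion); assign the result to a new name so the original `dat` used in the rest of this chapter is unchanged.
```
sim_stroop <- function(n_per_grp = 100, mu = 800, effect = 50, error_sd = 100) {
  trial_types <- c("congruent" = 0.5, "incongruent" = -0.5)
  data.frame(
    trial_type = rep(names(trial_types), each = n_per_grp)
  ) %>%
    mutate(
      trial_type.e = recode(trial_type, !!!trial_types),
      error = rnorm(2 * n_per_grp, 0, error_sd),
      RT = mu + effect * trial_type.e + error
    )
}

# e.g. a smaller grand mean and a larger effect
dat2 <- sim_stroop(mu = 700, effect = 100)
lm(RT ~ trial_type.e, data = dat2)$coefficients
```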
### 9\.4\.5 Residuals
You can use the `residuals()` function to extract the error term for each data point. These are the DV values minus the estimates for the intercept and trial type. We’ll make a density plot of the [residuals](https://psyteachr.github.io/glossary/r#residual-error "That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors.") below and compare it to the normal distribution we used for the error term.
```
res <- residuals(my_lm)
ggplot(dat) +
stat_function(aes(0), color = "grey60",
fun = dnorm, n = 101,
args = list(mean = 0, sd = error_sd)) +
geom_density(aes(res, color = trial_type))
```
Figure 9\.2: Model residuals should be approximately normally distributed for each group
You can also compare the model residuals to the simulated error values. If the model is accurate, they should be almost identical. If the intercept estimate is slightly off, the points will be slightly above or below the black line. If the estimate for the effect of trial type is slightly off, there will be a small, systematic difference between residuals for congruent and incongruent trials.
```
ggplot(dat) +
geom_abline(slope = 1) +
geom_point(aes(error, res, color = trial_type)) +
ylab("Model Residuals") +
xlab("Simulated Error")
```
Figure 9\.3: Model residuals should be very similar to the simulated error
What happens to the residuals if you fit a model that ignores trial type (e.g., `lm(RT ~ 1, data = dat)`)?
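One way to look at this is sketched below (`res_null` is our own name for the new residuals). Because the intercept\-only model ignores the trial\-type effect, the two densities should shift apart by roughly the size of the effect.
```
res_null <- residuals(lm(RT ~ 1, data = dat))
ggplot(dat) +
  geom_density(aes(res_null, color = trial_type))
```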
### 9\.4\.6 Predict New Values
You can use the estimates from your model to predict new data points, given values for the model parameters. For this simple example, we just need to know the trial type to make a prediction.
For congruent trials, you would predict that a new data point would be equal to the intercept estimate plus the trial type estimate multiplied by 0\.5 (the effect code for congruent trials).
```
int_est <- my_lm$coefficients[["(Intercept)"]]
tt_est <- my_lm$coefficients[["trial_type.e"]]
tt_code <- trial_types[["congruent"]]
new_congruent_RT <- int_est + tt_est * tt_code
new_congruent_RT
```
```
## [1] 819.1605
```
You can also use the `predict()` function to do this more easily. The second argument is a data table with columns for the factors in the model and rows with the values that you want to use for the prediction.
```
predict(my_lm, newdata = tibble(trial_type.e = 0.5))
```
```
## 1
## 819.1605
```
If you look up this function using `?predict`, you will see that “The function invokes particular methods which depend on the class of the first argument.” What this means is that `predict()` works differently depending on whether you’re predicting from the output of `lm()` or other analysis functions. You can search for help on the lm version with `?predict.lm`.
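For example, you can predict both trial types in one call by giving `newdata` one row per value (a minimal sketch using the effect codes from above):
```
predict(my_lm, newdata = data.frame(trial_type.e = c(0.5, -0.5))) # congruent, then incongruent
```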
### 9\.4\.7 Coding Categorical Variables
In the example above, we used **effect coding** for trial type. You can also use **sum coding**, which assigns \+1 and \-1 to the levels instead of \+0\.5 and \-0\.5\. More commonly, you might want to use **treatment coding**, which assigns 0 to one level (usually a baseline or control condition) and 1 to the other level (usually a treatment or experimental condition).
Here we will add sum\-coded and treatment\-coded versions of `trial_type` to the dataset using the `recode()` function.
```
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, "congruent" = +1, "incongruent" = -1),
trial_type.tr = recode(trial_type, "congruent" = 1, "incongruent" = 0)
)
```
If you define named vectors with your levels and codes, you can use them with the `recode()` function by expanding them with `!!!`.
```
tt_sum <- c("congruent" = +1,
"incongruent" = -1)
tt_tr <- c("congruent" = 1,
"incongruent" = 0)
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, !!!tt_sum),
trial_type.tr = recode(trial_type, !!!tt_tr)
)
```
Here are the coefficients for the effect\-coded version. They should be the same as those from the last analysis.
```
lm(RT ~ trial_type.e, data = dat)$coefficients
```
```
## (Intercept) trial_type.e
## 788.19166 61.93773
```
Here are the coefficients for the sum\-coded version. This gives the same results as effect coding, except that the estimate for the categorical factor is exactly half as large, because it represents the difference between each trial type and the hypothetical condition of 0 (the overall mean RT), rather than the difference between the two trial types.
```
lm(RT ~ trial_type.sum, data = dat)$coefficients
```
```
## (Intercept) trial_type.sum
## 788.19166 30.96887
```
Here are the coefficients for the treatment\-coded version. The estimate for the categorical factor will be the same as in the effect\-coded version, but the intercept will decrease: it equals the effect\-coded intercept (the grand mean) minus the estimate for trial type from the sum\-coded version.
```
lm(RT ~ trial_type.tr, data = dat)$coefficients
```
```
## (Intercept) trial_type.tr
## 757.22279 61.93773
```
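As a quick numeric check of the relationships described above (a sketch of our own; the object names are arbitrary), you can compare the coefficients from the three codings directly:
```
b_e   <- lm(RT ~ trial_type.e,   data = dat)$coefficients
b_sum <- lm(RT ~ trial_type.sum, data = dat)$coefficients
b_tr  <- lm(RT ~ trial_type.tr,  data = dat)$coefficients

b_e[["trial_type.e"]] / 2                        # same as the sum-coded estimate
b_e[["(Intercept)"]] - b_sum[["trial_type.sum"]] # same as the treatment-coded intercept
b_tr[["(Intercept)"]]
```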
9\.5 Relationships among tests
------------------------------
### 9\.5\.1 T\-test
The t\-test is just a special, limited example of a general linear model.
```
t.test(RT ~ trial_type.e, data = dat, var.equal = TRUE)
```
```
##
## Two Sample t-test
##
## data: RT by trial_type.e
## t = -4.2975, df = 198, p-value = 2.707e-05
## alternative hypothesis: true difference in means between group -0.5 and group 0.5 is not equal to 0
## 95 percent confidence interval:
## -90.35945 -33.51601
## sample estimates:
## mean in group -0.5 mean in group 0.5
## 757.2228 819.1605
```
What happens when you use other codings for trial type in the t\-test above? Which coding maps onto the results of the t\-test best?
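For example, you could re\-run the test with the treatment\-coded version (a quick sketch). The t, df, and p values should be unchanged, because only the group labels differ:
```
t.test(RT ~ trial_type.tr, data = dat, var.equal = TRUE)
```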
### 9\.5\.2 ANOVA
ANOVA is also a special, limited version of the linear model.
```
my_aov <- aov(RT ~ trial_type.e, data = dat)
summary(my_aov, intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 124249219 124249219 11963.12 < 2e-16 ***
## trial_type.e 1 191814 191814 18.47 2.71e-05 ***
## Residuals 198 2056432 10386
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The easiest way to get parameters out of an analysis is to use the `broom::tidy()` function. This returns a tidy table that you can extract numbers of interest from. Here, we just want to get the F\-value for the effect of trial\_type. Compare the square root of this value to the t\-value from the t\-test above.
```
f <- broom::tidy(my_aov)$statistic[1]
sqrt(f)
```
```
## [1] 4.297498
```
9\.6 Understanding ANOVA
------------------------
We’ll walk through an example of a one\-way ANOVA with the following equation:
\\(Y\_{ij} \= \\mu \+ A\_i \+ S(A)\_{ij}\\)
This means that each data point (\\(Y\_{ij}\\)) is predicted to be the sum of the grand mean (\\(\\mu\\)), plus the effect of factor A (\\(A\_i\\)), plus some residual error (\\(S(A)\_{ij}\\)).
### 9\.6\.1 Means, Variability, and Deviation Scores
Let’s create a simple simulation function so you can quickly create a two\-sample dataset with specified Ns, means, and SDs.
```
two_sample <- function(n = 10, m1 = 0, m2 = 0, sd1 = 1, sd2 = 1) {
s1 <- rnorm(n, m1, sd1)
s2 <- rnorm(n, m2, sd2)
data.frame(
Y = c(s1, s2),
grp = rep(c("A", "B"), each = n)
)
}
```
Now we will use `two_sample()` to create a dataset `dat` with N\=5 per group, means of \-2 and \+2, and SDs of 1 and 1 (yes, this is an effect size of d \= 4\).
```
dat <- two_sample(5, -2, +2, 1, 1)
```
You can calculate how each data point (`Y`) deviates from the overall sample mean (\\(\\hat{\\mu}\\)), which is represented by the horizontal grey line below; these deviations are the vertical grey lines. You can also calculate how different each point is from its group\-specific mean (\\(\\hat{A\_i}\\)), represented by the horizontal coloured lines below; those deviations are the coloured vertical lines.
Figure 9\.4: Deviations of each data point (Y) from the overall and group means
You can use these deviations to calculate variability between groups and within groups. ANOVA tests whether the variability between groups is larger than that within groups, accounting for the number of groups and observations.
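If you want to compute these deviation scores yourself, a minimal sketch looks like this (the column names `dev_grand` and `dev_group` are our own):
```
dev_scores <- dat %>%
  mutate(dev_grand = Y - mean(Y)) %>% # deviation from the overall mean (grey lines)
  group_by(grp) %>%
  mutate(dev_group = Y - mean(Y)) %>% # deviation from each group mean (coloured lines)
  ungroup()

dev_scores
```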
### 9\.6\.2 Decomposition matrices
We can use the estimation equations for a one\-factor ANOVA to calculate the model components.
* `mu` is the overall mean
* `a` is how different each group mean is from the overall mean
* `err` is residual error, calculated by subtracting `mu` and `a` from `Y`
This produces a *decomposition matrix*, a table with columns for `Y`, `mu`, `a`, and `err`.
```
decomp <- dat %>%
select(Y, grp) %>%
mutate(mu = mean(Y)) %>% # calculate mu_hat
group_by(grp) %>%
mutate(a = mean(Y) - mu) %>% # calculate a_hat for each grp
ungroup() %>%
mutate(err = Y - mu - a) # calculate residual error
```
| Y | grp | mu | a | err |
| --- | --- | --- | --- | --- |
| \-1\.4770938 | A | 0\.1207513 | \-1\.533501 | \-0\.0643443 |
| \-2\.9508741 | A | 0\.1207513 | \-1\.533501 | \-1\.5381246 |
| \-0\.6376736 | A | 0\.1207513 | \-1\.533501 | 0\.7750759 |
| \-1\.7579084 | A | 0\.1207513 | \-1\.533501 | \-0\.3451589 |
| \-0\.2401977 | A | 0\.1207513 | \-1\.533501 | 1\.1725518 |
| 0\.1968155 | B | 0\.1207513 | 1\.533501 | \-1\.4574367 |
| 2\.6308008 | B | 0\.1207513 | 1\.533501 | 0\.9765486 |
| 2\.0293297 | B | 0\.1207513 | 1\.533501 | 0\.3750775 |
| 2\.1629037 | B | 0\.1207513 | 1\.533501 | 0\.5086516 |
| 1\.2514112 | B | 0\.1207513 | 1\.533501 | \-0\.4028410 |
Calculate sums of squares for `mu`, `a`, and `err`.
```
SS <- decomp %>%
summarise(mu = sum(mu*mu),
a = sum(a*a),
err = sum(err*err))
```
| mu | a | err |
| --- | --- | --- |
| 0\.1458088 | 23\.51625 | 8\.104182 |
If you’ve done everything right, `SS$mu + SS$a + SS$err` should equal the sum of squares for Y.
```
SS_Y <- sum(decomp$Y^2)
all.equal(SS_Y, SS$mu + SS$a + SS$err)
```
```
## [1] TRUE
```
Divide each sum of squares by its corresponding degrees of freedom (df) to calculate mean squares. The df for `mu` is 1, the df for factor `a` is `K-1` (K is the number of groups), and the df for `err` is `N - K` (N is the number of observations).
```
K <- n_distinct(dat$grp)
N <- nrow(dat)
df <- c(mu = 1, a = K - 1, err = N - K)
MS <- SS / df
```
| mu | a | err |
| --- | --- | --- |
| 0\.1458088 | 23\.51625 | 1\.013023 |
Then calculate an F\-ratio for `mu` and `a` by dividing their mean squares by the error term mean square. Get the p\-values that correspond to these F\-values using the `pf()` function.
```
F_mu <- MS$mu / MS$err
F_a <- MS$a / MS$err
p_mu <- pf(F_mu, df1 = df['mu'], df2 = df['err'], lower.tail = FALSE)
p_a <- pf(F_a, df1 = df['a'], df2 = df['err'], lower.tail = FALSE)
```
Put everything into a data frame to display it in the same way as the ANOVA summary function.
```
my_calcs <- data.frame(
term = c("Intercept", "grp", "Residuals"),
Df = df,
SS = c(SS$mu, SS$a, SS$err),
MS = c(MS$mu, MS$a, MS$err),
F = c(F_mu, F_a, NA),
p = c(p_mu, p_a, NA)
)
```
| | term | Df | SS | MS | F | p |
| --- | --- | --- | --- | --- | --- | --- |
| mu | Intercept | 1 | 0\.146 | 0\.146 | 0\.144 | 0\.714 |
| a | grp | 1 | 23\.516 | 23\.516 | 23\.214 | 0\.001 |
| err | Residuals | 8 | 8\.104 | 1\.013 | NA | NA |
Now run a one\-way ANOVA on your results and compare it to what you obtained in your calculations.
```
aov(Y ~ grp, data = dat) %>% summary(intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 0.146 0.146 0.144 0.71427
## grp 1 23.516 23.516 23.214 0.00132 **
## Residuals 8 8.104 1.013
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Using the code above, write your own function that takes a table of data and returns an ANOVA results table like the one above.
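If you get stuck, here is one possible sketch (not the only solution). It assumes the table has a DV column named `Y` and a grouping column named `grp`, like `dat` above, and the function name `my_anova` is arbitrary. Try writing your own version before reading it.
```
my_anova <- function(data) {
  # decomposition matrix: overall mean, group effects, residual error
  decomp <- data %>%
    select(Y, grp) %>%
    mutate(mu = mean(Y)) %>%
    group_by(grp) %>%
    mutate(a = mean(Y) - mu) %>%
    ungroup() %>%
    mutate(err = Y - mu - a)

  # sums of squares
  SS <- decomp %>%
    summarise(mu = sum(mu^2), a = sum(a^2), err = sum(err^2))

  # degrees of freedom and mean squares
  K  <- n_distinct(decomp$grp)
  N  <- nrow(decomp)
  df <- c(mu = 1, a = K - 1, err = N - K)
  MS_mu  <- SS$mu  / df[["mu"]]
  MS_a   <- SS$a   / df[["a"]]
  MS_err <- SS$err / df[["err"]]

  # F-ratios and p-values
  F_mu <- MS_mu / MS_err
  F_a  <- MS_a  / MS_err

  data.frame(
    term = c("Intercept", "grp", "Residuals"),
    Df   = df,
    SS   = c(SS$mu, SS$a, SS$err),
    MS   = c(MS_mu, MS_a, MS_err),
    F    = c(F_mu, F_a, NA),
    p    = c(pf(F_mu, df[["mu"]], df[["err"]], lower.tail = FALSE),
             pf(F_a,  df[["a"]],  df[["err"]], lower.tail = FALSE),
             NA)
  )
}

my_anova(dat)
```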
9\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [categorical](https://psyteachr.github.io/glossary/c#categorical) | Data that can only take certain values, such as types of pet. |
| [coding scheme](https://psyteachr.github.io/glossary/c#coding.scheme) | How to represent categorical variables with numbers for use in models |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [dependent variable](https://psyteachr.github.io/glossary/d#dependent.variable) | The target variable that is being analyzed, whose value is assumed to depend on other variables. |
| [effect code](https://psyteachr.github.io/glossary/e#effect.code) | A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means. |
| [error term](https://psyteachr.github.io/glossary/e#error.term) | The term in a model that represents the difference between the actual and predicted values |
| [general linear model](https://psyteachr.github.io/glossary/g#general.linear.model) | A mathematical model comparing how one or more independent variables affect a continuous dependent variable |
| [independent variable](https://psyteachr.github.io/glossary/i#independent.variable) | A variable whose value is assumed to influence the value of a dependent variable. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [residual error](https://psyteachr.github.io/glossary/r#residual.error) | That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
9\.8 Exercises
--------------
Download the [exercises](exercises/09_glm_exercise.Rmd). See the [answers](exercises/09_glm_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(9)
# run this to access the answers
dataskills::exercise(9, answers = TRUE)
```
9\.1 Learning Objectives
------------------------
### 9\.1\.1 Basic
1. Define the [components](glm.html#glm-components) of the GLM
2. [Simulate data](glm.html#sim-glm) using GLM equations [(video)](https://youtu.be/JQ90LnVCbKc)
3. Identify the model parameters that correspond to the data\-generation parameters
4. Understand and plot [residuals](glm.html#residuals) [(video)](https://youtu.be/sr-NtxiH2Qk)
5. [Predict new values](glm.html#predict) using the model [(video)](https://youtu.be/0o4LEbVVWfM)
6. Explain the differences among [coding schemes](glm.html#coding-schemes) [(video)](https://youtu.be/SqL28AbLj3g)
### 9\.1\.2 Intermediate
7. Demonstrate the [relationships](glm.html#test-rels) among two\-sample t\-test, one\-way ANOVA, and linear regression
8. Given data and a GLM, [generate a decomposition matrix](glm.html#decomp) and calculate sums of squares, mean squares, and F ratios for a one\-way ANOVA
### 9\.1\.1 Basic
1. Define the [components](glm.html#glm-components) of the GLM
2. [Simulate data](glm.html#sim-glm) using GLM equations [(video)](https://youtu.be/JQ90LnVCbKc)
3. Identify the model parameters that correspond to the data\-generation parameters
4. Understand and plot [residuals](glm.html#residuals) [(video)](https://youtu.be/sr-NtxiH2Qk)
5. [Predict new values](glm.html#predict) using the model [(video)](https://youtu.be/0o4LEbVVWfM)
6. Explain the differences among [coding schemes](glm.html#coding-schemes) [(video)](https://youtu.be/SqL28AbLj3g)
### 9\.1\.2 Intermediate
7. Demonstrate the [relationships](glm.html#test-rels) among two\-sample t\-test, one\-way ANOVA, and linear regression
8. Given data and a GLM, [generate a decomposition matrix](glm.html#decomp) and calculate sums of squares, mean squares, and F ratios for a one\-way ANOVA
9\.2 Resources
--------------
* [Stub for this lesson](stubs/9_glm.Rmd)
* [Jeff Miller and Patricia Haden, Statistical Analysis with the Linear Model (free online textbook)](http://www.otago.ac.nz/psychology/otago039309.pdf)
* [lecture slides introducing the General Linear Model](slides/08_glm_slides.pdf)
* [GLM shiny app](http://rstudio2.psy.gla.ac.uk/Dale/GLM)
* [F distribution](http://rstudio2.psy.gla.ac.uk/Dale/fdist)
9\.3 Setup
----------
```
# libraries needed for these examples
library(tidyverse)
library(broom)
set.seed(30250) # makes sure random numbers are reproducible
```
9\.4 GLM
--------
### 9\.4\.1 What is the GLM?
The [General Linear Model](https://psyteachr.github.io/glossary/g#general-linear-model "A mathematical model comparing how one or more independent variables affect a continuous dependent variable") (GLM) is a general mathematical framework for expressing relationships among variables that can express or test linear relationships between a numerical [dependent variable](https://psyteachr.github.io/glossary/d#dependent-variable "The target variable that is being analyzed, whose value is assumed to depend on other variables.") and any combination of [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") or [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") [independent variables](https://psyteachr.github.io/glossary/i#independent-variable "A variable whose value is assumed to influence the value of a dependent variable.").
### 9\.4\.2 Components
There are some mathematical conventions that you need to learn to understand the equations representing linear models. Once you understand those, learning about the GLM will get much easier.
| Component of GLM | Notation |
| --- | --- |
| Dependent Variable (DV) | \\(Y\\) |
| Grand Average | \\(\\mu\\) (the Greek letter “mu”) |
| Main Effects | \\(A, B, C, \\ldots\\) |
| Interactions | \\(AB, AC, BC, ABC, \\ldots\\) |
| Random Error | \\(S(Group)\\) |
The linear equation predicts the dependent variable (\\(Y\\)) as the sum of the grand average value of \\(Y\\) (\\(\\mu\\), also called the intercept), the main effects of all the predictor variables (\\(A\+B\+C\+ \\ldots\\)), the interactions among all the predictor variables (\\(AB, AC, BC, ABC, \\ldots\\)), and some random error (\\(S(Group)\\)). The equation for a model with two predictor variables (\\(A\\) and \\(B\\)) and their interaction (\\(AB\\)) is written like this:
\\(Y\\) \~ \\(\\mu\+A\+B\+AB\+S(Group)\\)
Don’t worry if this doesn’t make sense until we walk through a concrete example.
### 9\.4\.3 Simulating data from GLM
A good way to learn about linear models is to [simulate](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") data where you know exactly how the variables are related, and then analyse this simulated data to see where the parameters show up in the analysis.
We’ll start with a very simple linear model that just has a single categorical factor with two levels. Let’s say we’re predicting reaction times for congruent and incongruent trials in a Stroop task for a single participant. Average reaction time (`mu`) is 800ms, and is 50ms faster for congruent than incongruent trials (`effect`).
A **factor** is a categorical variable that is used to divide subjects into groups, usually to draw some comparison. Factors are composed of different **levels**. Do not confuse factors with levels!
In the example above, trial type is a factor level, incongrunt is a factor level, and congruent is a factor level.
You need to represent [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") factors with numbers. The numbers, or [coding scheme](https://psyteachr.github.io/glossary/c#coding-scheme "How to represent categorical variables with numbers for use in models") you choose will affect the numbers you get out of the analysis and how you need to interpret them. Here, we will [effect code](https://psyteachr.github.io/glossary/e#effect-code "A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means.") the trial types so that congruent trials are coded as \+0\.5, and incongruent trials are coded as \-0\.5\.
A person won’t always respond exactly the same way. They might be a little faster on some trials than others, due to random fluctuations in attention, learning about the task, or fatigue. So we can add an [error term](https://psyteachr.github.io/glossary/e#error-term "The term in a model that represents the difference between the actual and predicted values") to each trial. We can’t know how much any specific trial will differ, but we can characterise the distribution of how much trials differ from average and then sample from this distribution.
Here, we’ll assume the error term is sampled from a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") with a [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of 100 ms (the mean of the error term distribution is always 0\). We’ll also sample 100 trials of each type, so we can see a range of variation.
So first create variables for all of the parameters that describe your data.
```
n_per_grp <- 100
mu <- 800 # average RT
effect <- 50 # average difference between congruent and incongruent trials
error_sd <- 100 # standard deviation of the error term
trial_types <- c("congruent" = 0.5, "incongruent" = -0.5) # effect code
```
Then simulate the data by creating a data table with a row for each trial and columns for the trial type and the error term (random numbers samples from a normal distribution with the SD specified by `error_sd`). For categorical variables, include both a column with the text labels (`trial_type`) and another column with the coded version (`trial_type.e`) to make it easier to check what the codings mean and to use for graphing. Calculate the dependent variable (`RT`) as the sum of the grand mean (`mu`), the coefficient (`effect`) multiplied by the effect\-coded predictor variable (`trial_type.e`), and the error term.
```
dat <- data.frame(
trial_type = rep(names(trial_types), each = n_per_grp)
) %>%
mutate(
trial_type.e = recode(trial_type, !!!trial_types),
error = rnorm(nrow(.), 0, error_sd),
RT = mu + effect*trial_type.e + error
)
```
The `!!!` (triple bang) in the code `recode(trial_type, !!!trial_types)` is a way to expand the vector `trial_types <- c(“congruent” = 0.5, “incongruent” = -0.5)`. It’s equivalent to `recode(trial_type, “congruent” = 0.5, “incongruent” = -0.5)`. This pattern avoids making mistakes with recoding because there is only one place where you set up the category to code mapping (in the `trial_types` vector).
Last but not least, always plot simulated data to make sure it looks like you expect.
```
ggplot(dat, aes(trial_type, RT)) +
geom_violin() +
geom_boxplot(aes(fill = trial_type),
width = 0.25, show.legend = FALSE)
```
Figure 9\.1: Simulated Data
### 9\.4\.4 Linear Regression
Now we can analyse the data we simulated using the function `lm()`. It takes the formula as the first argument. This is the same as the data\-generating equation, but you can omit the error term (this is implied), and takes the data table as the second argument. Use the `summary()` function to see the statistical summary.
```
my_lm <- lm(RT ~ trial_type.e, data = dat)
summary(my_lm)
```
```
##
## Call:
## lm(formula = RT ~ trial_type.e, data = dat)
##
## Residuals:
## Min 1Q Median 3Q Max
## -302.110 -70.052 0.948 68.262 246.220
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 788.192 7.206 109.376 < 2e-16 ***
## trial_type.e 61.938 14.413 4.297 2.71e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 101.9 on 198 degrees of freedom
## Multiple R-squared: 0.08532, Adjusted R-squared: 0.0807
## F-statistic: 18.47 on 1 and 198 DF, p-value: 2.707e-05
```
Notice how the **estimate** for the `(Intercept)` is close to the value we set for `mu` and the estimate for `trial_type.e` is close to the value we set for `effect`.
Change the values of `mu` and `effect`, resimulate the data, and re\-run the linear model. What happens to the estimates?
### 9\.4\.5 Residuals
You can use the `residuals()` function to extract the error term for each each data point. This is the DV values, minus the estimates for the intercept and trial type. We’ll make a density plot of the [residuals](https://psyteachr.github.io/glossary/r#residual-error "That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors.") below and compare it to the normal distribution we used for the error term.
```
res <- residuals(my_lm)
ggplot(dat) +
stat_function(aes(0), color = "grey60",
fun = dnorm, n = 101,
args = list(mean = 0, sd = error_sd)) +
geom_density(aes(res, color = trial_type))
```
Figure 9\.2: Model residuals should be approximately normally distributed for each group
You can also compare the model residuals to the simulated error values. If the model is accurate, they should be almost identical. If the intercept estimate is slightly off, the points will be slightly above or below the black line. If the estimate for the effect of trial type is slightly off, there will be a small, systematic difference between residuals for congruent and incongruent trials.
```
ggplot(dat) +
geom_abline(slope = 1) +
geom_point(aes(error, res,color = trial_type)) +
ylab("Model Residuals") +
xlab("Simulated Error")
```
Figure 9\.3: Model residuals should be very similar to the simulated error
What happens to the residuals if you fit a model that ignores trial type (e.g., `lm(Y ~ 1, data = dat)`)?
### 9\.4\.6 Predict New Values
You can use the estimates from your model to predict new data points, given values for the model parameters. For this simple example, we just need to know the trial type to make a prediction.
For congruent trials, you would predict that a new data point would be equal to the intercept estimate plus the trial type estimate multiplied by 0\.5 (the effect code for congruent trials).
```
int_est <- my_lm$coefficients[["(Intercept)"]]
tt_est <- my_lm$coefficients[["trial_type.e"]]
tt_code <- trial_types[["congruent"]]
new_congruent_RT <- int_est + tt_est * tt_code
new_congruent_RT
```
```
## [1] 819.1605
```
You can also use the `predict()` function to do this more easily. The second argument is a data table with columns for the factors in the model and rows with the values that you want to use for the prediction.
```
predict(my_lm, newdata = tibble(trial_type.e = 0.5))
```
```
## 1
## 819.1605
```
If you look up this function using `?predict`, you will see that “The function invokes particular methods which depend on the class of the first argument.” What this means is that `predict()` works differently depending on whether you’re predicting from the output of `lm()` or other analysis functions. You can search for help on the lm version with `?predict.lm`.
### 9\.4\.7 Coding Categorical Variables
In the example above, we used **effect coding** for trial type. You can also use **sum coding**, which assigns \+1 and \-1 to the levels instead of \+0\.5 and \-0\.5\. More commonly, you might want to use **treatment coding**, which assigns 0 to one level (usually a baseline or control condition) and 1 to the other level (usually a treatment or experimental condition).
Here we will add sum\-coded and treatment\-coded versions of `trial_type` to the dataset using the `recode()` function.
```
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, "congruent" = +1, "incongruent" = -1),
trial_type.tr = recode(trial_type, "congruent" = 1, "incongruent" = 0)
)
```
If you define named vectors with your levels and coding, you can use them with the `recode()` function if you expand them using `!!!`.
```
tt_sum <- c("congruent" = +1,
"incongruent" = -1)
tt_tr <- c("congruent" = 1,
"incongruent" = 0)
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, !!!tt_sum),
trial_type.tr = recode(trial_type, !!!tt_tr)
)
```
Here are the coefficients for the effect\-coded version. They should be the same as those from the last analysis.
```
lm(RT ~ trial_type.e, data = dat)$coefficients
```
```
## (Intercept) trial_type.e
## 788.19166 61.93773
```
Here are the coefficients for the sum\-coded version. This give the same results as effect coding, except the estimate for the categorical factor will be exactly half as large, as it represents the difference between each trial type and the hypothetical condition of 0 (the overall mean RT), rather than the difference between the two trial types.
```
lm(RT ~ trial_type.sum, data = dat)$coefficients
```
```
## (Intercept) trial_type.sum
## 788.19166 30.96887
```
Here are the coefficients for the treatment\-coded version. The estimate for the categorical factor will be the same as in the effect\-coded version, but the intercept will decrease. It will be equal to the intercept minus the estimate for trial type from the sum\-coded version.
```
lm(RT ~ trial_type.tr, data = dat)$coefficients
```
```
## (Intercept) trial_type.tr
## 757.22279 61.93773
```
### 9\.4\.1 What is the GLM?
The [General Linear Model](https://psyteachr.github.io/glossary/g#general-linear-model "A mathematical model comparing how one or more independent variables affect a continuous dependent variable") (GLM) is a general mathematical framework for expressing relationships among variables that can express or test linear relationships between a numerical [dependent variable](https://psyteachr.github.io/glossary/d#dependent-variable "The target variable that is being analyzed, whose value is assumed to depend on other variables.") and any combination of [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") or [continuous](https://psyteachr.github.io/glossary/c#continuous "Data that can take on any values between other existing values.") [independent variables](https://psyteachr.github.io/glossary/i#independent-variable "A variable whose value is assumed to influence the value of a dependent variable.").
### 9\.4\.2 Components
There are some mathematical conventions that you need to learn to understand the equations representing linear models. Once you understand those, learning about the GLM will get much easier.
| Component of GLM | Notation |
| --- | --- |
| Dependent Variable (DV) | \\(Y\\) |
| Grand Average | \\(\\mu\\) (the Greek letter “mu”) |
| Main Effects | \\(A, B, C, \\ldots\\) |
| Interactions | \\(AB, AC, BC, ABC, \\ldots\\) |
| Random Error | \\(S(Group)\\) |
The linear equation predicts the dependent variable (\\(Y\\)) as the sum of the grand average value of \\(Y\\) (\\(\\mu\\), also called the intercept), the main effects of all the predictor variables (\\(A\+B\+C\+ \\ldots\\)), the interactions among all the predictor variables (\\(AB, AC, BC, ABC, \\ldots\\)), and some random error (\\(S(Group)\\)). The equation for a model with two predictor variables (\\(A\\) and \\(B\\)) and their interaction (\\(AB\\)) is written like this:
\\(Y\\) \~ \\(\\mu\+A\+B\+AB\+S(Group)\\)
Don’t worry if this doesn’t make sense until we walk through a concrete example.
### 9\.4\.3 Simulating data from GLM
A good way to learn about linear models is to [simulate](https://psyteachr.github.io/glossary/s#simulation "Generating data from summary parameters") data where you know exactly how the variables are related, and then analyse this simulated data to see where the parameters show up in the analysis.
We’ll start with a very simple linear model that just has a single categorical factor with two levels. Let’s say we’re predicting reaction times for congruent and incongruent trials in a Stroop task for a single participant. Average reaction time (`mu`) is 800ms, and is 50ms faster for congruent than incongruent trials (`effect`).
A **factor** is a categorical variable that is used to divide subjects into groups, usually to draw some comparison. Factors are composed of different **levels**. Do not confuse factors with levels!
In the example above, trial type is a factor level, incongrunt is a factor level, and congruent is a factor level.
You need to represent [categorical](https://psyteachr.github.io/glossary/c#categorical "Data that can only take certain values, such as types of pet.") factors with numbers. The numbers, or [coding scheme](https://psyteachr.github.io/glossary/c#coding-scheme "How to represent categorical variables with numbers for use in models") you choose will affect the numbers you get out of the analysis and how you need to interpret them. Here, we will [effect code](https://psyteachr.github.io/glossary/e#effect-code "A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means.") the trial types so that congruent trials are coded as \+0\.5, and incongruent trials are coded as \-0\.5\.
A person won’t always respond exactly the same way. They might be a little faster on some trials than others, due to random fluctuations in attention, learning about the task, or fatigue. So we can add an [error term](https://psyteachr.github.io/glossary/e#error-term "The term in a model that represents the difference between the actual and predicted values") to each trial. We can’t know how much any specific trial will differ, but we can characterise the distribution of how much trials differ from average and then sample from this distribution.
Here, we’ll assume the error term is sampled from a [normal distribution](https://psyteachr.github.io/glossary/n#normal-distribution "A symmetric distribution of data where values near the centre are most probable.") with a [standard deviation](https://psyteachr.github.io/glossary/s#standard-deviation "A descriptive statistic that measures how spread out data are relative to the mean.") of 100 ms (the mean of the error term distribution is always 0\). We’ll also sample 100 trials of each type, so we can see a range of variation.
So first create variables for all of the parameters that describe your data.
```
n_per_grp <- 100
mu <- 800 # average RT
effect <- 50 # average difference between congruent and incongruent trials
error_sd <- 100 # standard deviation of the error term
trial_types <- c("congruent" = 0.5, "incongruent" = -0.5) # effect code
```
Then simulate the data by creating a data table with a row for each trial and columns for the trial type and the error term (random numbers samples from a normal distribution with the SD specified by `error_sd`). For categorical variables, include both a column with the text labels (`trial_type`) and another column with the coded version (`trial_type.e`) to make it easier to check what the codings mean and to use for graphing. Calculate the dependent variable (`RT`) as the sum of the grand mean (`mu`), the coefficient (`effect`) multiplied by the effect\-coded predictor variable (`trial_type.e`), and the error term.
```
dat <- data.frame(
trial_type = rep(names(trial_types), each = n_per_grp)
) %>%
mutate(
trial_type.e = recode(trial_type, !!!trial_types),
error = rnorm(nrow(.), 0, error_sd),
RT = mu + effect*trial_type.e + error
)
```
The `!!!` (triple bang) in the code `recode(trial_type, !!!trial_types)` is a way to expand the vector `trial_types <- c(“congruent” = 0.5, “incongruent” = -0.5)`. It’s equivalent to `recode(trial_type, “congruent” = 0.5, “incongruent” = -0.5)`. This pattern avoids making mistakes with recoding because there is only one place where you set up the category to code mapping (in the `trial_types` vector).
Last but not least, always plot simulated data to make sure it looks like you expect.
```
ggplot(dat, aes(trial_type, RT)) +
geom_violin() +
geom_boxplot(aes(fill = trial_type),
width = 0.25, show.legend = FALSE)
```
Figure 9\.1: Simulated Data
### 9\.4\.4 Linear Regression
Now we can analyse the data we simulated using the function `lm()`. It takes the formula as the first argument. This is the same as the data\-generating equation, but you can omit the error term (this is implied), and takes the data table as the second argument. Use the `summary()` function to see the statistical summary.
```
my_lm <- lm(RT ~ trial_type.e, data = dat)
summary(my_lm)
```
```
##
## Call:
## lm(formula = RT ~ trial_type.e, data = dat)
##
## Residuals:
## Min 1Q Median 3Q Max
## -302.110 -70.052 0.948 68.262 246.220
##
## Coefficients:
## Estimate Std. Error t value Pr(>|t|)
## (Intercept) 788.192 7.206 109.376 < 2e-16 ***
## trial_type.e 61.938 14.413 4.297 2.71e-05 ***
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
##
## Residual standard error: 101.9 on 198 degrees of freedom
## Multiple R-squared: 0.08532, Adjusted R-squared: 0.0807
## F-statistic: 18.47 on 1 and 198 DF, p-value: 2.707e-05
```
Notice how the **estimate** for the `(Intercept)` is close to the value we set for `mu` and the estimate for `trial_type.e` is close to the value we set for `effect`.
Change the values of `mu` and `effect`, resimulate the data, and re\-run the linear model. What happens to the estimates?
### 9\.4\.5 Residuals
You can use the `residuals()` function to extract the error term for each each data point. This is the DV values, minus the estimates for the intercept and trial type. We’ll make a density plot of the [residuals](https://psyteachr.github.io/glossary/r#residual-error "That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors.") below and compare it to the normal distribution we used for the error term.
```
res <- residuals(my_lm)
ggplot(dat) +
stat_function(aes(0), color = "grey60",
fun = dnorm, n = 101,
args = list(mean = 0, sd = error_sd)) +
geom_density(aes(res, color = trial_type))
```
Figure 9\.2: Model residuals should be approximately normally distributed for each group
You can also compare the model residuals to the simulated error values. If the model is accurate, they should be almost identical. If the intercept estimate is slightly off, the points will be slightly above or below the black line. If the estimate for the effect of trial type is slightly off, there will be a small, systematic difference between residuals for congruent and incongruent trials.
```
ggplot(dat) +
geom_abline(slope = 1) +
geom_point(aes(error, res,color = trial_type)) +
ylab("Model Residuals") +
xlab("Simulated Error")
```
Figure 9\.3: Model residuals should be very similar to the simulated error
What happens to the residuals if you fit a model that ignores trial type (e.g., `lm(Y ~ 1, data = dat)`)?
### 9\.4\.6 Predict New Values
You can use the estimates from your model to predict new data points, given values for the model parameters. For this simple example, we just need to know the trial type to make a prediction.
For congruent trials, you would predict that a new data point would be equal to the intercept estimate plus the trial type estimate multiplied by 0\.5 (the effect code for congruent trials).
```
int_est <- my_lm$coefficients[["(Intercept)"]]
tt_est <- my_lm$coefficients[["trial_type.e"]]
tt_code <- trial_types[["congruent"]]
new_congruent_RT <- int_est + tt_est * tt_code
new_congruent_RT
```
```
## [1] 819.1605
```
You can also use the `predict()` function to do this more easily. The second argument is a data table with columns for the factors in the model and rows with the values that you want to use for the prediction.
```
predict(my_lm, newdata = tibble(trial_type.e = 0.5))
```
```
## 1
## 819.1605
```
If you look up this function using `?predict`, you will see that “The function invokes particular methods which depend on the class of the first argument.” What this means is that `predict()` works differently depending on whether you’re predicting from the output of `lm()` or other analysis functions. You can search for help on the lm version with `?predict.lm`.
### 9\.4\.7 Coding Categorical Variables
In the example above, we used **effect coding** for trial type. You can also use **sum coding**, which assigns \+1 and \-1 to the levels instead of \+0\.5 and \-0\.5\. More commonly, you might want to use **treatment coding**, which assigns 0 to one level (usually a baseline or control condition) and 1 to the other level (usually a treatment or experimental condition).
Here we will add sum\-coded and treatment\-coded versions of `trial_type` to the dataset using the `recode()` function.
```
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, "congruent" = +1, "incongruent" = -1),
trial_type.tr = recode(trial_type, "congruent" = 1, "incongruent" = 0)
)
```
If you define named vectors with your levels and coding, you can use them with the `recode()` function if you expand them using `!!!`.
```
tt_sum <- c("congruent" = +1,
"incongruent" = -1)
tt_tr <- c("congruent" = 1,
"incongruent" = 0)
dat <- dat %>% mutate(
trial_type.sum = recode(trial_type, !!!tt_sum),
trial_type.tr = recode(trial_type, !!!tt_tr)
)
```
Here are the coefficients for the effect\-coded version. They should be the same as those from the last analysis.
```
lm(RT ~ trial_type.e, data = dat)$coefficients
```
```
## (Intercept) trial_type.e
## 788.19166 61.93773
```
Here are the coefficients for the sum\-coded version. This give the same results as effect coding, except the estimate for the categorical factor will be exactly half as large, as it represents the difference between each trial type and the hypothetical condition of 0 (the overall mean RT), rather than the difference between the two trial types.
```
lm(RT ~ trial_type.sum, data = dat)$coefficients
```
```
## (Intercept) trial_type.sum
## 788.19166 30.96887
```
Here are the coefficients for the treatment\-coded version. The estimate for the categorical factor will be the same as in the effect\-coded version, but the intercept will decrease. It will be equal to the intercept minus the estimate for trial type from the sum\-coded version.
```
lm(RT ~ trial_type.tr, data = dat)$coefficients
```
```
## (Intercept) trial_type.tr
## 757.22279 61.93773
```
9\.5 Relationships among tests
------------------------------
### 9\.5\.1 T\-test
The t\-test is just a special, limited example of a general linear model.
```
t.test(RT ~ trial_type.e, data = dat, var.equal = TRUE)
```
```
##
## Two Sample t-test
##
## data: RT by trial_type.e
## t = -4.2975, df = 198, p-value = 2.707e-05
## alternative hypothesis: true difference in means between group -0.5 and group 0.5 is not equal to 0
## 95 percent confidence interval:
## -90.35945 -33.51601
## sample estimates:
## mean in group -0.5 mean in group 0.5
## 757.2228 819.1605
```
What happens when you use other codings for trial type in the t\-test above? Which coding maps onto the results of the t\-test best?
### 9\.5\.2 ANOVA
ANOVA is also a special, limited version of the linear model.
```
my_aov <- aov(RT ~ trial_type.e, data = dat)
summary(my_aov, intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 124249219 124249219 11963.12 < 2e-16 ***
## trial_type.e 1 191814 191814 18.47 2.71e-05 ***
## Residuals 198 2056432 10386
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
The easiest way to get parameters out of an analysis is to use the `broom::tidy()` function. This returns a tidy table that you can extract numbers of interest from. Here, we just want to get the F\-value for the effect of trial\_type. Compare the square root of this value to the t\-value from the t\-tests above.
```
f <- broom::tidy(my_aov)$statistic[1]
sqrt(f)
```
```
## [1] 4.297498
```
9\.6 Understanding ANOVA
------------------------
We’ll walk through an example of a one\-way ANOVA with the following equation:
\\(Y\_{ij} \= \\mu \+ A\_i \+ S(A)\_{ij}\\)
This means that each data point (\\(Y\_{ij}\\)) is predicted to be the sum of the grand mean (\\(\\mu\\)), plus the effect of factor A (\\(A\_i\\)), plus some residual error (\\(S(A)\_{ij}\\)).
### 9\.6\.1 Means, Variability, and Deviation Scores
Let’s create a simple simulation function so you can quickly create a two\-sample dataset with specified Ns, means, and SDs.
```
two_sample <- function(n = 10, m1 = 0, m2 = 0, sd1 = 1, sd2 = 1) {
s1 <- rnorm(n, m1, sd1)
s2 <- rnorm(n, m2, sd2)
data.frame(
Y = c(s1, s2),
grp = rep(c("A", "B"), each = n)
)
}
```
Now we will use `two_sample()` to create a dataset `dat` with N\=5 per group, means of \-2 and \+2, and SDs of 1 and 1 (yes, this is an effect size of d \= 4\).
```
dat <- two_sample(5, -2, +2, 1, 1)
```
You can calculate how each data point (`Y`) deviates from the overall sample mean (\\(\\hat{\\mu}\\)), which is represented by the horizontal grey line below and the deviations are the vertical grey lines. You can also calculate how different each point is from its group\-specific mean (\\(\\hat{A\_i}\\)), which are represented by the horizontal coloured lines below and the deviations are the coloured vertical lines.
Figure 9\.4: Deviations of each data point (Y) from the overall and group means
You can use these deviations to calculate variability between groups and within groups. ANOVA tests whether the variability between groups is larger than that within groups, accounting for the number of groups and observations.
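If you want to draw something like the figure above yourself, here is a minimal plotting sketch (the helper columns and aesthetic choices are illustrative, not the exact code used to make Figure 9\.4):
```
library(tidyverse) # dplyr and ggplot2

dat_dev <- dat %>%
  mutate(grand_mean = mean(Y)) %>%   # mu-hat
  group_by(grp) %>%
  mutate(group_mean = mean(Y)) %>%   # group means
  ungroup() %>%
  mutate(i = row_number())

ggplot(dat_dev, aes(i, Y, colour = grp)) +
  geom_hline(aes(yintercept = grand_mean), colour = "grey60") +
  # deviation of each point from the grand mean (grey)
  geom_segment(aes(xend = i, yend = grand_mean), colour = "grey60") +
  # deviation of each point from its group mean (coloured)
  geom_segment(aes(xend = i, yend = group_mean)) +
  # short horizontal marks at each group mean
  geom_segment(aes(x = i - 0.3, xend = i + 0.3,
                   y = group_mean, yend = group_mean)) +
  geom_point(size = 2)
```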
### 9\.6\.2 Decomposition matrices
We can use the estimation equations for a one\-factor ANOVA to calculate the model components.
* `mu` is the overall mean
* `a` is how different each group mean is from the overall mean
* `err` is residual error, calculated by subtracting `mu` and `a` from `Y`
This produces a *decomposition matrix*, a table with columns for `Y`, `mu`, `a`, and `err`.
```
decomp <- dat %>%
select(Y, grp) %>%
mutate(mu = mean(Y)) %>% # calculate mu_hat
group_by(grp) %>%
mutate(a = mean(Y) - mu) %>% # calculate a_hat for each grp
ungroup() %>%
mutate(err = Y - mu - a) # calculate residual error
```
| Y | grp | mu | a | err |
| --- | --- | --- | --- | --- |
| \-1\.4770938 | A | 0\.1207513 | \-1\.533501 | \-0\.0643443 |
| \-2\.9508741 | A | 0\.1207513 | \-1\.533501 | \-1\.5381246 |
| \-0\.6376736 | A | 0\.1207513 | \-1\.533501 | 0\.7750759 |
| \-1\.7579084 | A | 0\.1207513 | \-1\.533501 | \-0\.3451589 |
| \-0\.2401977 | A | 0\.1207513 | \-1\.533501 | 1\.1725518 |
| 0\.1968155 | B | 0\.1207513 | 1\.533501 | \-1\.4574367 |
| 2\.6308008 | B | 0\.1207513 | 1\.533501 | 0\.9765486 |
| 2\.0293297 | B | 0\.1207513 | 1\.533501 | 0\.3750775 |
| 2\.1629037 | B | 0\.1207513 | 1\.533501 | 0\.5086516 |
| 1\.2514112 | B | 0\.1207513 | 1\.533501 | \-0\.4028410 |
Calculate sums of squares for `mu`, `a`, and `err`.
```
SS <- decomp %>%
summarise(mu = sum(mu*mu),
a = sum(a*a),
err = sum(err*err))
```
| mu | a | err |
| --- | --- | --- |
| 0\.1458088 | 23\.51625 | 8\.104182 |
If you’ve done everything right, `SS$mu + SS$a + SS$err` should equal the sum of squares for Y.
```
SS_Y <- sum(decomp$Y^2)
all.equal(SS_Y, SS$mu + SS$a + SS$err)
```
```
## [1] TRUE
```
Divide each sum of squares by its corresponding degrees of freedom (df) to calculate mean squares. The df for `mu` is 1, the df for factor `a` is `K-1` (K is the number of groups), and the df for `err` is `N - K` (N is the number of observations).
```
K <- n_distinct(dat$grp)
N <- nrow(dat)
df <- c(mu = 1, a = K - 1, err = N - K)
MS <- SS / df
```
| mu | a | err |
| --- | --- | --- |
| 0\.1458088 | 23\.51625 | 1\.013023 |
Then calculate an F\-ratio for `mu` and `a` by dividing their mean squares by the error term mean square. Get the p\-values that correspond to these F\-values using the `pf()` function.
```
F_mu <- MS$mu / MS$err
F_a <- MS$a / MS$err
p_mu <- pf(F_mu, df1 = df['mu'], df2 = df['err'], lower.tail = FALSE)
p_a <- pf(F_a, df1 = df['a'], df2 = df['err'], lower.tail = FALSE)
```
Put everything into a data frame to display it in the same way as the ANOVA summary function.
```
my_calcs <- data.frame(
term = c("Intercept", "grp", "Residuals"),
Df = df,
SS = c(SS$mu, SS$a, SS$err),
MS = c(MS$mu, MS$a, MS$err),
F = c(F_mu, F_a, NA),
p = c(p_mu, p_a, NA)
)
```
| | term | Df | SS | MS | F | p |
| --- | --- | --- | --- | --- | --- | --- |
| mu | Intercept | 1 | 0\.146 | 0\.146 | 0\.144 | 0\.714 |
| a | grp | 1 | 23\.516 | 23\.516 | 23\.214 | 0\.001 |
| err | Residuals | 8 | 8\.104 | 1\.013 | NA | NA |
Now run a one\-way ANOVA on your results and compare it to what you obtained in your calculations.
```
aov(Y ~ grp, data = dat) %>% summary(intercept = TRUE)
```
```
## Df Sum Sq Mean Sq F value Pr(>F)
## (Intercept) 1 0.146 0.146 0.144 0.71427
## grp 1 23.516 23.516 23.214 0.00132 **
## Residuals 8 8.104 1.013
## ---
## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
```
Using the code above, write your own function that takes a table of data and returns the ANOVA results table like above.
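One possible solution is sketched below; the function name and arguments are illustrative choices, and it simply wraps the decomposition steps above into a single function:
```
decomp_anova <- function(data, dv = "Y", group = "grp") {
  Y <- data[[dv]]
  g <- data[[group]]

  mu <- mean(Y)          # grand mean (mu-hat)
  a <- ave(Y, g) - mu    # group effect (a-hat), repeated for each row
  err <- Y - mu - a      # residual error

  K <- length(unique(g))
  N <- length(Y)
  df <- c(1, K - 1, N - K)
  SS <- c(N * mu^2, sum(a^2), sum(err^2))
  MS <- SS / df
  Fval <- c(MS[1] / MS[3], MS[2] / MS[3], NA)
  pval <- c(pf(Fval[1], df[1], df[3], lower.tail = FALSE),
            pf(Fval[2], df[2], df[3], lower.tail = FALSE),
            NA)

  data.frame(term = c("Intercept", group, "Residuals"),
             Df = df, SS = SS, MS = MS, F = Fval, p = pval)
}

decomp_anova(dat) # compare with aov(Y ~ grp, data = dat)
```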
9\.7 Glossary
-------------
| term | definition |
| --- | --- |
| [categorical](https://psyteachr.github.io/glossary/c#categorical) | Data that can only take certain values, such as types of pet. |
| [coding scheme](https://psyteachr.github.io/glossary/c#coding.scheme) | How to represent categorical variables with numbers for use in models |
| [continuous](https://psyteachr.github.io/glossary/c#continuous) | Data that can take on any values between other existing values. |
| [dependent variable](https://psyteachr.github.io/glossary/d#dependent.variable) | The target variable that is being analyzed, whose value is assumed to depend on other variables. |
| [effect code](https://psyteachr.github.io/glossary/e#effect.code) | A coding scheme for categorical variables that contrasts each group mean with the mean of all the group means. |
| [error term](https://psyteachr.github.io/glossary/e#error.term) | The term in a model that represents the difference between the actual and predicted values |
| [general linear model](https://psyteachr.github.io/glossary/g#general.linear.model) | A mathematical model comparing how one or more independent variables affect a continuous dependent variable |
| [independent variable](https://psyteachr.github.io/glossary/i#independent.variable) | A variable whose value is assumed to influence the value of a dependent variable. |
| [normal distribution](https://psyteachr.github.io/glossary/n#normal.distribution) | A symmetric distribution of data where values near the centre are most probable. |
| [residual error](https://psyteachr.github.io/glossary/r#residual.error) | That part of an observation that cannot be captured by the statistical model, and thus is assumed to reflect unknown factors. |
| [simulation](https://psyteachr.github.io/glossary/s#simulation) | Generating data from summary parameters |
| [standard deviation](https://psyteachr.github.io/glossary/s#standard.deviation) | A descriptive statistic that measures how spread out data are relative to the mean. |
9\.8 Exercises
--------------
Download the [exercises](exercises/09_glm_exercise.Rmd). See the [answers](exercises/09_glm_answers.Rmd) only after you’ve attempted all the questions.
```
# run this to access the exercise
dataskills::exercise(9)
# run this to access the answers
dataskills::exercise(9, answers = TRUE)
```
Chapter 10 Reproducible Workflows
=================================
10\.1 Learning Objectives
-------------------------
### 10\.1\.1 Basic
1. Create a reproducible script in R Markdown
2. Edit the YAML header to add table of contents and other options
3. Include a table
4. Include a figure
5. Use `source()` to include code from an external file
6. Report the output of an analysis using inline R
### 10\.1\.2 Intermediate
7. Output doc and PDF formats
8. Add a bibliography and in\-line citations
9. Format tables using `kableExtra`
### 10\.1\.3 Advanced
10. Create a computationally reproducible project in Code Ocean
10\.2 Resources
---------------
* [Chapter 27: R Markdown](http://r4ds.had.co.nz/r-markdown.html) in *R for Data Science*
* [R Markdown Cheat Sheet](http://www.rstudio.com/wp-content/uploads/2016/03/rmarkdown-cheatsheet-2.0.pdf)
* [R Markdown reference Guide](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)
* [R Markdown Tutorial](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown: The Definitive Guide](https://bookdown.org/yihui/rmarkdown/) by Yihui Xie, J. J. Allaire, \& Garrett Grolemund
* [Papaja](https://crsh.github.io/papaja_man/) Reproducible APA Manuscripts
* [Code Ocean](https://codeocean.com/) for Computational Reproducibility
10\.3 Setup
-----------
```
library(tidyverse)
library(knitr)
library(broom)
set.seed(8675309)
```
10\.4 R Markdown
----------------
By now you should be pretty comfortable working with R Markdown files from the weekly formative exercises and set exercises. Here, we’ll explore some of the more advanced options and create an R Markdown document that produces a [reproducible](https://psyteachr.github.io/glossary/r#reproducibility "The extent to which the findings of a study can be repeated in some other context") manuscript.
First, make a new R Markdown document.
### 10\.4\.1 knitr options
When you create a new R Markdown file in RStudio, a setup chunk is automatically created.
```
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
```
You can set more default options for code chunks here. See the [knitr options documentation](https://yihui.name/knitr/options/) for explanations of the possible options.
```
```{r setup, include=FALSE}
knitr::opts_chunk$set(
  fig.width  = 8,
  fig.height = 5,
  fig.path   = 'images/',
  echo       = FALSE,
  warning    = FALSE,
  message    = FALSE,
  cache      = FALSE
)
```
```
The code above sets the following options:
* `fig.width = 8` : figure width is 8 inches
* `fig.height = 5` : figure height is 5 inches
* `fig.path = 'images/'` : figures are saved in the directory “images”
* `echo = FALSE` : do not show code chunks in the rendered document
* `warning = FALSE` : do not show any function warnings
* `message = FALSE` : do not show any function messages
* `cache = FALSE` : run all the code to create all of the images and objects each time you knit (set to `TRUE` if you have time\-consuming code)
### 10\.4\.2 YAML Header
The [YAML](https://psyteachr.github.io/glossary/y#yaml "A structured format for information") header is where you can set several options.
```
---
title: "My Demo Document"
author: "Me"
output:
html_document:
theme: spacelab
highlight: tango
toc: true
toc_float:
collapsed: false
smooth_scroll: false
toc_depth: 3
number_sections: false
---
```
The built\-in themes are: “cerulean,” “cosmo,” “flatly,” “journal,” “lumen,” “paper,” “readable,” “sandstone,” “simplex,” “spacelab,” “united,” and “yeti.” You can [view and download more themes](http://www.datadreaming.org/post/r-markdown-theme-gallery/).
Try changing the values from `false` to `true` to see what the options do.
### 10\.4\.3 TOC and Document Headers
If you include a table of contents (`toc`), it is created from your document headers. Headers in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links.") are created by prefacing the header title with one or more hashes (`#`). Add a typical paper structure to your document like the one below.
```
## Abstract
My abstract here...
## Introduction
What's the question; why is it interesting?
## Methods
### Participants
How many participants and why? Do your power calculation here.
### Procedure
What will they do?
### Analysis
Describe the analysis plan...
## Results
Demo results for simulated data...
## Discussion
What does it all mean?
## References
```
### 10\.4\.4 Code Chunks
You can include [code chunks](https://psyteachr.github.io/glossary/c#chunk "A section of code in an R Markdown file") that create and display images, tables, or computations to include in your text. Let’s start by simulating some data.
First, create a code chunk in your document. You can put this before the abstract, since we won’t be showing the code in this document. We’ll use a modified version of the `two_sample` function from the [GLM lecture](09_glm.html) to create two groups with a difference of 0\.75 and 100 observations per group.
This function was modified to add sex and effect\-code both sex and group. Using the `recode` function to set effect or difference coding makes it clearer which value corresponds to which level. There is no effect of sex or interaction with group in these simulated data.
```
two_sample <- function(diff = 0.5, n_per_group = 20) {
tibble(Y = c(rnorm(n_per_group, -.5 * diff, sd = 1),
rnorm(n_per_group, .5 * diff, sd = 1)),
grp = factor(rep(c("a", "b"), each = n_per_group)),
sex = factor(rep(c("female", "male"), times = n_per_group))
) %>%
mutate(
grp_e = recode(grp, "a" = -0.5, "b" = 0.5),
sex_e = recode(sex, "female" = -0.5, "male" = 0.5)
)
}
```
This function requires the `tibble` and `dplyr` packages, so remember to load the whole tidyverse package at the top of this script (e.g., in the setup chunk).
Now we can make a separate code chunk to create our simulated dataset `dat`.
```
dat <- two_sample(diff = 0.75, n_per_group = 100)
```
#### 10\.4\.4\.1 Tables
Next, create a code chunk where you want to display a table of the descriptives (e.g., Participants section of the Methods). We’ll use tidyverse functions you learned in the [data wrangling lectures](04_wrangling.html) to create summary statistics for each group.
```
```{r, results='asis'}
dat %>%
group_by(grp, sex) %>%
summarise(n = n(),
Mean = mean(Y),
SD = sd(Y)) %>%
rename(group = grp) %>%
mutate_if(is.numeric, round, 3) %>%
knitr::kable()
```
```
```
## `summarise()` has grouped output by 'grp'. You can override using the `.groups` argument.
```
```
## `mutate_if()` ignored the following grouping variables:
## Column `group`
```
| group | sex | n | Mean | SD |
| --- | --- | --- | --- | --- |
| a | female | 50 | \-0\.361 | 0\.796 |
| a | male | 50 | \-0\.284 | 1\.052 |
| b | female | 50 | 0\.335 | 1\.080 |
| b | male | 50 | 0\.313 | 0\.904 |
Notice that the r chunk specifies the option `results='asis'`. This lets you format the table using the `kable()` function from `knitr`. You can also use more specialised functions from [papaja](https://crsh.github.io/papaja_man/reporting.html#tables) or [kableExtra](https://haozhu233.github.io/kableExtra/awesome_table_in_html.html) to format your tables.
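For instance, a minimal `kableExtra` sketch might look like this (assuming the kableExtra package is installed; `kable_styling()` is one of its basic table helpers):
```
library(kableExtra)

dat %>%
  group_by(grp, sex) %>%
  summarise(n = n(), Mean = mean(Y), SD = sd(Y), .groups = "drop") %>%
  kable(digits = 3) %>%
  kable_styling(full_width = FALSE)
```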
#### 10\.4\.4\.2 Images
Next, create a code chunk where you want to display the image in your document. Let’s put it in the Results section. Use what you learned in the [data visualisation lecture](03_ggplot.html) to show violin\-boxplots for the two groups.
```
```{r, fig1, fig.cap="Figure 1. Scores by group and sex."}
ggplot(dat, aes(grp, Y, fill = sex)) +
geom_violin(alpha = 0.5) +
geom_boxplot(width = 0.25,
position = position_dodge(width = 0.9),
show.legend = FALSE) +
scale_fill_manual(values = c("orange", "purple")) +
xlab("Group") +
ylab("Score") +
theme(text = element_text(size = 30, family = "Times"))
```
```
The last line changes the default text size and font, which can be useful for generating figures that meet a journal’s requirements.
Figure 10\.1: Figure 1\. Scores by group and sex.
You can also include images that you did not create in R using the typical markdown syntax for images, for example (the path here is just a placeholder for wherever your image file lives):
```
![All the Things by Hyperbole and a Half](path/to/image.png)
```
All the Things by [Hyperbole and a Half](http://hyperboleandahalf.blogspot.com/)
#### 10\.4\.4\.3 In\-line R
Now let’s use what you learned in the [GLM lecture](09_glm.html) to analyse our simulated data. The document is getting a little cluttered, so let’s move this code to external scripts.
* Create a new R script called “functions.R”
* Move the `library(tidyverse)` line and the `two_sample()` function definition to this file.
* Create a new R script called “analysis.R”
* Move the code for creating `dat` to this file.
* Add the following code to the end of the setup chunk:
```
source("functions.R")
source("analysis.R")
```
The `source` function lets you include code from an external file. This is really useful for making your documents readable. Just make sure you call your source files in the right order (e.g., include function definitions before you use the functions).
In the “analysis.R” file, we’re going to run the analysis code and save any numbers we might want to use in our manuscript to variables.
```
grp_lm <- lm(Y ~ grp_e * sex_e, data = dat)
stats <- grp_lm %>%
broom::tidy() %>%
mutate_if(is.numeric, round, 3)
```
The code above runs our analysis predicting `Y` from the effect\-coded group variable `grp_e`, the effect\-coded sex variable `sex_e`, and their interaction. The `tidy` function from the `broom` package turns the output into a tidy table. The `mutate_if` function uses the function `is.numeric` to check whether each column should be mutated, and, if it is numeric, applies the `round` function with the digits argument set to 3\.
If you want to report the results of the analysis in a paragraph instead of a table, you need to know how to refer to each number in the table. Like with everything in R, there are many ways to do this. One is by specifying the column and row number like this:
```
stats$p.value[2]
```
```
## [1] 0
```
Another way is to create variables for each row like this:
```
grp_stats <- filter(stats, term == "grp_e")
sex_stats <- filter(stats, term == "sex_e")
ixn_stats <- filter(stats, term == "grp_e:sex_e")
```
Add the above code to the end of your analysis.R file. Then you can refer to columns by name like this:
```
grp_stats$p.value
sex_stats$statistic
ixn_stats$estimate
```
```
## [1] 0
## [1] 0.197
## [1] -0.099
```
You can insert these numbers into a paragraph with inline R code that looks like this:
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
p = `r grp_stats$p.value`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
p = `r sex_stats$p.value`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
p = `r ixn_stats$p.value`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \= 0\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
Remember, line breaks are ignored when you render the file (unless you add two spaces at the end of lines), so you can use line breaks to make it easier to read your text with inline R code.
The p\-values aren’t formatted in APA style. We wrote a function to deal with this in the [function lecture](07_functions.html). Add this function to the “functions.R” file and change the inline text to use the `report_p` function.
```
report_p <- function(p, digits = 3) {
if (!is.numeric(p)) stop("p must be a number")
if (p <= 0) warning("p-values are normally greater than 0")
if (p >= 1) warning("p-values are normally less than 1")
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
fmt <- paste0("p = %.", digits, "f")
reported = sprintf(fmt, roundp)
}
reported
}
```
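A quick check of the function's behaviour:
```
report_p(0.0437)  # "p = 0.044"
report_p(0.00012) # "p < .001"
```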
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
`r report_p(grp_stats$p.value, 3)`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
`r report_p(sex_stats$p.value, 3)`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
`r report_p(ixn_stats$p.value, 3)`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \< .001\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
You might also want to report the statistics for the regression. There are a lot of numbers to format and insert, so it is easier to do this in the analysis script using `sprintf` for formatting.
```
s <- summary(grp_lm)
# calculate p value from fstatistic
fstat.p <- pf(s$fstatistic[1],
s$fstatistic[2],
s$fstatistic[3],
lower=FALSE)
adj_r <- sprintf(
"The regression equation had an adjusted $R^{2}$ of %.3f ($F_{(%i, %i)}$ = %.3f, %s).",
round(s$adj.r.squared, 3),
s$fstatistic[2],
s$fstatistic[3],
round(s$fstatistic[1], 3),
report_p(fstat.p, 3)
)
```
Then you can just insert the text in your manuscript with inline R code like this: `` `r adj_r` ``:
The regression equation had an adjusted \\(R^{2}\\) of 0\.090 (\\(F\_{(3, 196\)}\\) \= 7\.546, p \< .001\).
### 10\.4\.5 Bibliography
There are several ways to do in\-text citations and automatically generate a [bibliography](https://rmarkdown.rstudio.com/authoring_bibliographies_and_citations.html#bibliographies) in RMarkdown.
#### 10\.4\.5\.1 Create a BibTeX File Manually
You can just make a BibTeX file and add citations manually. Make a new Text File in RStudio called “bibliography.bib.”
Next, add the line `bibliography: bibliography.bib` to your YAML header.
You can add citations in the following format:
```
@article{shortname,
author = {Author One and Author Two and Author Three},
title = {Paper Title},
journal = {Journal Title},
volume = {vol},
number = {issue},
pages = {startpage--endpage},
year = {year},
doi = {doi}
}
```
#### 10\.4\.5\.2 Citing R packages
You can get the citation for an R package using the function `citation`. You can paste the bibtex entry into your bibliography.bib file. Make sure to add a short name (e.g., “rmarkdown”) before the first comma to refer to the reference.
```
citation(package="rmarkdown")
```
```
##
## To cite the 'rmarkdown' package in publications, please use:
##
## JJ Allaire and Yihui Xie and Jonathan McPherson and Javier Luraschi
## and Kevin Ushey and Aron Atkins and Hadley Wickham and Joe Cheng and
## Winston Chang and Richard Iannone (2021). rmarkdown: Dynamic
## Documents for R. R package version 2.9.4. URL
## https://rmarkdown.rstudio.com.
##
## Yihui Xie and J.J. Allaire and Garrett Grolemund (2018). R Markdown:
## The Definitive Guide. Chapman and Hall/CRC. ISBN 9781138359338. URL
## https://bookdown.org/yihui/rmarkdown.
##
## Yihui Xie and Christophe Dervieux and Emily Riederer (2020). R
## Markdown Cookbook. Chapman and Hall/CRC. ISBN 9780367563837. URL
## https://bookdown.org/yihui/rmarkdown-cookbook.
##
## To see these entries in BibTeX format, use 'print(<citation>,
## bibtex=TRUE)', 'toBibtex(.)', or set
## 'options(citation.bibtex.max=999)'.
```
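As the message above notes, you can print the BibTeX entries themselves and then copy them into your bibliography.bib file:
```
print(citation("rmarkdown"), bibtex = TRUE)
```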
#### 10\.4\.5\.3 Download Citation Info
You can get the BibTeX formatted citation from most publisher websites. For example, go to the publisher’s page for [Equivalence Testing for Psychological Research: A Tutorial](https://journals.sagepub.com/doi/abs/10.1177/2515245918770963), click on the Cite button (in the sidebar or under the bottom Explore More menu), choose BibTeX format, and download the citation. You can open up the file in a text editor and copy the text. It should look like this:
```
@article{doi:10.1177/2515245918770963,
author = {Daniël Lakens and Anne M. Scheel and Peder M. Isager},
title ={Equivalence Testing for Psychological Research: A Tutorial},
journal = {Advances in Methods and Practices in Psychological Science},
volume = {1},
number = {2},
pages = {259-269},
year = {2018},
doi = {10.1177/2515245918770963},
URL = {
https://doi.org/10.1177/2515245918770963
},
eprint = {
https://doi.org/10.1177/2515245918770963
}
,
abstract = { Psychologists must be able to test both for the presence of an effect and for the absence of an effect. In addition to testing against zero, researchers can use the two one-sided tests (TOST) procedure to test for equivalence and reject the presence of a smallest effect size of interest (SESOI). The TOST procedure can be used to determine if an observed effect is surprisingly small, given that a true effect at least as extreme as the SESOI exists. We explain a range of approaches to determine the SESOI in psychological science and provide detailed examples of how equivalence tests should be performed and reported. Equivalence tests are an important extension of the statistical tools psychologists currently use and enable researchers to falsify predictions about the presence, and declare the absence, of meaningful effects. }
}
```
Paste the reference into your bibliography.bib file. Change `doi:10.1177/2515245918770963` in the first line of the reference to a short string you will use to cite the reference in your manuscript. We’ll use `TOSTtutorial`.
#### 10\.4\.5\.4 Converting from reference software
Most reference software, such as EndNote, Zotero, or Mendeley, can export references in BibTeX format. You just need to check the short names in the resulting file.
#### 10\.4\.5\.5 In\-text citations
You can cite references in text like this:
```
This tutorial uses several R packages [@tidyverse;@rmarkdown].
```
This tutorial uses several R packages ([Wickham 2017](#ref-tidyverse); [Allaire et al. 2018](#ref-rmarkdown)).
Put a minus in front of the @ if you just want the year:
```
Lakens, Scheel and Isager [-@TOSTtutorial] wrote a tutorial explaining how to test for the absence of an effect.
```
Lakens, Scheel and Isager ([2018](#ref-TOSTtutorial)) wrote a tutorial explaining how to test for the absence of an effect.
#### 10\.4\.5\.6 Citation Styles
You can search a [list of style files](https://www.zotero.org/styles) for various journals and download a file that will format your bibliography for a specific journal’s style. You’ll need to add the line `csl: filename.csl` to your YAML header.
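For example, a YAML header with both a bibliography and a citation style file might look like this (the .csl file name is a placeholder for whichever style file you downloaded):
```
---
title: "My Demo Document"
author: "Me"
output: html_document
bibliography: bibliography.bib
csl: apa.csl
---
```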
Add some citations to your bibliography.bib file, reference them in your text, and render your manuscript to see the automatically generated reference section. Try a few different citation style files.
### 10\.4\.6 Output Formats
You can knit your file to PDF or Word if you have the right packages installed on your computer.
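For example, you can list several output formats in the YAML header and pick one when you knit (a sketch; PDF output also requires a LaTeX installation, e.g. via the tinytex package):
```
output:
  html_document: default
  word_document: default
  pdf_document: default
```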
### 10\.4\.7 Computational Reproducibility
Computational reproducibility refers to making all aspects of your analysis reproducible, including specifics of the software you used to run the code you wrote. R packages get updated periodically and some of these updates may break your code. Using a computational reproducibility platform guards against this by always running your code in the same environment.
[Code Ocean](https://codeocean.com/) is a new platform that lets you run your code in the cloud via a web browser.
10\.5 Glossary
--------------
| term | definition |
| --- | --- |
| [chunk](https://psyteachr.github.io/glossary/c#chunk) | A section of code in an R Markdown file |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [reproducibility](https://psyteachr.github.io/glossary/r#reproducibility) | The extent to which the findings of a study can be repeated in some other context |
| [yaml](https://psyteachr.github.io/glossary/y#yaml) | A structured format for information |
10\.6 References
----------------
10\.1 Learning Objectives
-------------------------
### 10\.1\.1 Basic
1. Create a reproducible script in R Markdown
2. Edit the YAML header to add table of contents and other options
3. Include a table
4. Include a figure
5. Use `source()` to include code from an external file
6. Report the output of an analysis using inline R
### 10\.1\.2 Intermediate
7. Output doc and PDF formats
8. Add a bibliography and in\-line citations
9. Format tables using `kableExtra`
### 10\.1\.3 Advanced
10. Create a computationally reproducible project in Code Ocean
### 10\.1\.1 Basic
1. Create a reproducible script in R Markdown
2. Edit the YAML header to add table of contents and other options
3. Include a table
4. Include a figure
5. Use `source()` to include code from an external file
6. Report the output of an analysis using inline R
### 10\.1\.2 Intermediate
7. Output doc and PDF formats
8. Add a bibliography and in\-line citations
9. Format tables using `kableExtra`
### 10\.1\.3 Advanced
10. Create a computationally reproducible project in Code Ocean
10\.2 Resources
---------------
* [Chapter 27: R Markdown](http://r4ds.had.co.nz/r-markdown.html) in *R for Data Science*
* [R Markdown Cheat Sheet](http://www.rstudio.com/wp-content/uploads/2016/03/rmarkdown-cheatsheet-2.0.pdf)
* [R Markdown reference Guide](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)
* [R Markdown Tutorial](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown: The Definitive Guide](https://bookdown.org/yihui/rmarkdown/) by Yihui Xie, J. J. Allaire, \& Garrett Grolemund
* [Papaja](https://crsh.github.io/papaja_man/) Reproducible APA Manuscripts
* [Code Ocean](https://codeocean.com/) for Computational Reproducibility
10\.3 Setup
-----------
```
library(tidyverse)
library(knitr)
library(broom)
set.seed(8675309)
```
10\.4 R Markdown
----------------
By now you should be pretty comfortable working with R Markdown files from the weekly formative exercises and set exercises. Here, we’ll explore some of the more advanced options and create an R Markdown document that produces a [reproducible](https://psyteachr.github.io/glossary/r#reproducibility "The extent to which the findings of a study can be repeated in some other context") manuscript.
First, make a new R Markdown document.
### 10\.4\.1 knitr options
When you create a new R Markdown file in RStudio, a setup chunk is automatically created.
````{r setup, include=FALSE}`
```
knitr::opts_chunk$set(echo = TRUE)
```
`````
You can set more default options for code chunks here. See the [knitr options documentation](https://yihui.name/knitr/options/) for explanations of the possible options.
````{r setup, include=FALSE}`
```
knitr::opts_chunk$set(
fig.width = 8,
fig.height = 5,
fig.path = 'images/',
echo = FALSE,
warning = TRUE,
message = FALSE,
cache = FALSE
)
```
`````
The code above sets the following options:
* `fig.width = 8` : figure width is 8 inches
* `fig.height = 5` : figure height is 5 inches
* `fig.path = 'images/'` : figures are saved in the directory “images”
* `echo = FALSE` : do not show code chunks in the rendered document
* `warning = FALSE` : do not show any function warnings
* `message = FALSE` : do not show any function messages
* `cache = FALSE` : run all the code to create all of the images and objects each time you knit (set to `TRUE` if you have time\-consuming code)
### 10\.4\.2 YAML Header
The [YAML](https://psyteachr.github.io/glossary/y#yaml "A structured format for information") header is where you can set several options.
```
---
title: "My Demo Document"
author: "Me"
output:
html_document:
theme: spacelab
highlight: tango
toc: true
toc_float:
collapsed: false
smooth_scroll: false
toc_depth: 3
number_sections: false
---
```
The built\-in themes are: “cerulean,” “cosmo,” “flatly,” “journal,” “lumen,” “paper,” “readable,” “sandstone,” “simplex,” “spacelab,” “united,” and “yeti.” You can [view and download more themes](http://www.datadreaming.org/post/r-markdown-theme-gallery/).
Try changing the values from `false` to `true` to see what the options do.
### 10\.4\.3 TOC and Document Headers
If you include a table of contents (`toc`), it is created from your document headers. Headers in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links.") are created by prefacing the header title with one or more hashes (`#`). Add a typical paper structure to your document like the one below.
```
## Abstract
My abstract here...
## Introduction
What's the question; why is it interesting?
## Methods
### Participants
How many participants and why? Do your power calculation here.
### Procedure
What will they do?
### Analysis
Describe the analysis plan...
## Results
Demo results for simulated data...
## Discussion
What does it all mean?
## References
```
### 10\.4\.4 Code Chunks
You can include [code chunks](https://psyteachr.github.io/glossary/c#chunk "A section of code in an R Markdown file") that create and display images, tables, or computations to include in your text. Let’s start by simulating some data.
First, create a code chunk in your document. You can put this before the abstract, since we won’t be showing the code in this document. We’ll use a modified version of the `two_sample` function from the [GLM lecture](09_glm.html) to create two groups with a difference of 0\.75 and 100 observations per group.
This function was modified to add sex and effect\-code both sex and group. Using the `recode` function to set effect or difference coding makes it clearer which value corresponds to which level. There is no effect of sex or interaction with group in these simulated data.
```
two_sample <- function(diff = 0.5, n_per_group = 20) {
tibble(Y = c(rnorm(n_per_group, -.5 * diff, sd = 1),
rnorm(n_per_group, .5 * diff, sd = 1)),
grp = factor(rep(c("a", "b"), each = n_per_group)),
sex = factor(rep(c("female", "male"), times = n_per_group))
) %>%
mutate(
grp_e = recode(grp, "a" = -0.5, "b" = 0.5),
sex_e = recode(sex, "female" = -0.5, "male" = 0.5)
)
}
```
This function requires the `tibble` and `dplyr` packages, so remember to load the whole tidyverse package at the top of this script (e.g., in the setup chunk).
Now we can make a separate code chunk to create our simulated dataset `dat`.
```
dat <- two_sample(diff = 0.75, n_per_group = 100)
```
#### 10\.4\.4\.1 Tables
Next, create a code chunk where you want to display a table of the descriptives (e.g., Participants section of the Methods). We’ll use tidyverse functions you learned in the [data wrangling lectures](04_wrangling.html) to create summary statistics for each group.
```
```{r, results='asis'}
dat %>%
group_by(grp, sex) %>%
summarise(n = n(),
Mean = mean(Y),
SD = sd(Y)) %>%
rename(group = grp) %>%
mutate_if(is.numeric, round, 3) %>%
knitr::kable()
```
```
```
## `summarise()` has grouped output by 'grp'. You can override using the `.groups` argument.
```
```
## `mutate_if()` ignored the following grouping variables:
## Column `group`
```
| group | sex | n | Mean | SD |
| --- | --- | --- | --- | --- |
| a | female | 50 | \-0\.361 | 0\.796 |
| a | male | 50 | \-0\.284 | 1\.052 |
| b | female | 50 | 0\.335 | 1\.080 |
| b | male | 50 | 0\.313 | 0\.904 |
Notice that the r chunk specifies the option `results='asis'`. This lets you format the table using the `kable()` function from `knitr`. You can also use more specialised functions from [papaja](https://crsh.github.io/papaja_man/reporting.html#tables) or [kableExtra](https://haozhu233.github.io/kableExtra/awesome_table_in_html.html) to format your tables.
#### 10\.4\.4\.2 Images
Next, create a code chunk where you want to display the image in your document. Let’s put it in the Results section. Use what you learned in the [data visualisation lecture](03_ggplot.html) to show violin\-boxplots for the two groups.
```
```{r, fig1, fig.cap="Figure 1. Scores by group and sex."}
ggplot(dat, aes(grp, Y, fill = sex)) +
geom_violin(alpha = 0.5) +
geom_boxplot(width = 0.25,
position = position_dodge(width = 0.9),
show.legend = FALSE) +
scale_fill_manual(values = c("orange", "purple")) +
xlab("Group") +
ylab("Score") +
theme(text = element_text(size = 30, family = "Times"))
```
```
The last line changes the default text size and font, which can be useful for generating figures that meet a journal’s requirements.
Figure 10\.1: Figure 1\. Scores by group and sex.
You can also include images that you did not create in R using the typical markdown syntax for images:
```
```
All the Things by [Hyperbole and a Half](http://hyperboleandahalf.blogspot.com/)
#### 10\.4\.4\.3 In\-line R
Now let’s use what you learned in the [GLM lecture](09_glm.html) to analyse our simulated data. The document is getting a little cluttered, so let’s move this code to external scripts.
* Create a new R script called “functions.R”
* Move the `library(tidyverse)` line and the `two_sample()` function definition to this file.
* Create a new R script called “analysis.R”
* Move the code for creating `dat` to this file.
* Add the following code to the end of the setup chunk:
```
source("functions.R")
source("analysis.R")
```
The `source` function lets you include code from an external file. This is really useful for making your documents readable. Just make sure you call your source files in the right order (e.g., include function definitions before you use the functions).
In the “analysis.R” file, we’re going to run the analysis code and save any numbers we might want to use in our manuscript to variables.
```
grp_lm <- lm(Y ~ grp_e * sex_e, data = dat)
stats <- grp_lm %>%
broom::tidy() %>%
mutate_if(is.numeric, round, 3)
```
The code above runs our analysis predicting `Y` from the effect\-coded group variable `grp_e`, the effect\-coded sex variable `sex_e` and their intereaction. The `tidy` function from the `broom` package turns the output into a tidy table. The `mutate_if` function uses the function `is.numeric` to check if each column should be mutated, adn if it is numeric, applies the `round` function with the digits argument set to 3\.
If you want to report the results of the analysis in a paragraph istead of a table, you need to know how to refer to each number in the table. Like with everything in R, there are many wways to do this. One is by specifying the column and row number like this:
```
stats$p.value[2]
```
```
## [1] 0
```
Another way is to create variables for each row like this:
```
grp_stats <- filter(stats, term == "grp_e")
sex_stats <- filter(stats, term == "sex_e")
ixn_stats <- filter(stats, term == "grp_e:sex_e")
```
Add the above code to the end of your analysis.R file. Then you can refer to columns by name like this:
```
grp_stats$p.value
sex_stats$statistic
ixn_stats$estimate
```
```
## [1] 0
## [1] 0.197
## [1] -0.099
```
You can insert these numbers into a paragraph with inline R code that looks like this:
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
p = `r grp_stats$p.value`).
There was no significant difference between men and women
(B = `r sex_statsestimate`,
t = `r sex_stats$statistic`,
p = `r sex_stats$p.value`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
p = `r ixn_stats$p.value`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \= 0\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
Remember, line breaks are ignored when you render the file (unless you add two spaces at the end of lines), so you can use line breaks to make it easier to read your text with inline R code.
The p\-values aren’t formatted in APA style. We wrote a function to deal with this in the [function lecture](07_functions.html). Add this function to the “functions.R” file and change the inline text to use the `report_p` function.
```
report_p <- function(p, digits = 3) {
if (!is.numeric(p)) stop("p must be a number")
if (p <= 0) warning("p-values are normally greater than 0")
if (p >= 1) warning("p-values are normally less than 1")
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
fmt <- paste0("p = %.", digits, "f")
reported = sprintf(fmt, roundp)
}
reported
}
```
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
`r report_p(grp_stats$p.value, 3)`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
`r report_p(sex_stats$p.value, 3)`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
`r report_p(ixn_stats$p.value, 3)`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \< .001\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
You might also want to report the statistics for the regression. There are a lot of numbers to format and insert, so it is easier to do this in the analysis script using `sprintf` for formatting.
```
s <- summary(grp_lm)
# calculate p value from fstatistic
fstat.p <- pf(s$fstatistic[1],
s$fstatistic[2],
s$fstatistic[3],
lower=FALSE)
adj_r <- sprintf(
"The regression equation had an adjusted $R^{2}$ of %.3f ($F_{(%i, %i)}$ = %.3f, %s).",
round(s$adj.r.squared, 3),
s$fstatistic[2],
s$fstatistic[3],
round(s$fstatistic[1], 3),
report_p(fstat.p, 3)
)
```
Then you can just insert the text in your manuscript like this: \` adj\_r\`:
The regression equation had an adjusted \\(R^{2}\\) of 0\.090 (\\(F\_{(3, 196\)}\\) \= 7\.546, p \< .001\).
### 10\.4\.5 Bibliography
There are several ways to do in\-text citations and automatically generate a [bibliography](https://rmarkdown.rstudio.com/authoring_bibliographies_and_citations.html#bibliographies) in RMarkdown.
#### 10\.4\.5\.1 Create a BibTeX File Manually
You can just make a BibTeX file and add citations manually. Make a new Text File in RStudio called “bibliography.bib.”
Next, add the line `bibliography: bibliography.bib` to your YAML header.
You can add citations in the following format:
```
@article{shortname,
author = {Author One and Author Two and Author Three},
title = {Paper Title},
journal = {Journal Title},
volume = {vol},
number = {issue},
pages = {startpage--endpage},
year = {year},
doi = {doi}
}
```
#### 10\.4\.5\.2 Citing R packages
You can get the citation for an R package using the function `citation`. You can paste the bibtex entry into your bibliography.bib file. Make sure to add a short name (e.g., “rmarkdown”) before the first comma to refer to the reference.
```
citation(package="rmarkdown")
```
```
##
## To cite the 'rmarkdown' package in publications, please use:
##
## JJ Allaire and Yihui Xie and Jonathan McPherson and Javier Luraschi
## and Kevin Ushey and Aron Atkins and Hadley Wickham and Joe Cheng and
## Winston Chang and Richard Iannone (2021). rmarkdown: Dynamic
## Documents for R. R package version 2.9.4. URL
## https://rmarkdown.rstudio.com.
##
## Yihui Xie and J.J. Allaire and Garrett Grolemund (2018). R Markdown:
## The Definitive Guide. Chapman and Hall/CRC. ISBN 9781138359338. URL
## https://bookdown.org/yihui/rmarkdown.
##
## Yihui Xie and Christophe Dervieux and Emily Riederer (2020). R
## Markdown Cookbook. Chapman and Hall/CRC. ISBN 9780367563837. URL
## https://bookdown.org/yihui/rmarkdown-cookbook.
##
## To see these entries in BibTeX format, use 'print(<citation>,
## bibtex=TRUE)', 'toBibtex(.)', or set
## 'options(citation.bibtex.max=999)'.
```
#### 10\.4\.5\.3 Download Citation Info
You can get the BibTeX formatted citation from most publisher websites. For example, go to the publisher’s page for [Equivalence Testing for Psychological Research: A Tutorial](https://journals.sagepub.com/doi/abs/10.1177/2515245918770963), click on the Cite button (in the sidebar or under the bottom Explore More menu), choose BibTeX format, and download the citation. You can open up the file in a text editor and copy the text. It should look like this:
```
@article{doi:10.1177/2515245918770963,
author = {Daniël Lakens and Anne M. Scheel and Peder M. Isager},
title ={Equivalence Testing for Psychological Research: A Tutorial},
journal = {Advances in Methods and Practices in Psychological Science},
volume = {1},
number = {2},
pages = {259-269},
year = {2018},
doi = {10.1177/2515245918770963},
URL = {
https://doi.org/10.1177/2515245918770963
},
eprint = {
https://doi.org/10.1177/2515245918770963
}
,
abstract = { Psychologists must be able to test both for the presence of an effect and for the absence of an effect. In addition to testing against zero, researchers can use the two one-sided tests (TOST) procedure to test for equivalence and reject the presence of a smallest effect size of interest (SESOI). The TOST procedure can be used to determine if an observed effect is surprisingly small, given that a true effect at least as extreme as the SESOI exists. We explain a range of approaches to determine the SESOI in psychological science and provide detailed examples of how equivalence tests should be performed and reported. Equivalence tests are an important extension of the statistical tools psychologists currently use and enable researchers to falsify predictions about the presence, and declare the absence, of meaningful effects. }
}
```
Paste the reference into your bibliography.bib file. Change `doi:10.1177/2515245918770963` in the first line of the reference to a short string you will use to cite the reference in your manuscript. We’ll use `TOSTtutorial`.
#### 10\.4\.5\.4 Converting from reference software
Most reference software like EndNote, Zotero or mendeley have exporting options that can export to BibTeX format. You just need to check the shortnames in the resulting file.
#### 10\.4\.5\.5 In\-text citations
You can cite reference in text like this:
```
This tutorial uses several R packages [@tidyverse;@rmarkdown].
```
This tutorial uses several R packages ([Wickham 2017](#ref-tidyverse); [Allaire et al. 2018](#ref-rmarkdown)).
Put a minus in front of the @ if you just want the year:
```
Lakens, Scheel and Isengar [-@TOSTtutorial] wrote a tutorial explaining how to test for the absence of an effect.
```
Lakens, Scheel and Isengar ([2018](#ref-TOSTtutorial)) wrote a tutorial explaining how to test for the absence of an effect.
#### 10\.4\.5\.6 Citation Styles
You can search a [list of style files](https://www.zotero.org/styles) for various journals and download a file that will format your bibliography for a specific journal’s style. You’ll need to add the line `csl: filename.csl` to your YAML header.
Add some citations to your bibliography.bib file, reference them in your text, and render your manuscript to see the automatically generated reference section. Try a few different citation style files.
### 10\.4\.6 Output Formats
You can knit your file to PDF or Word if you have the right packages installed on your computer.
### 10\.4\.7 Computational Reproducibility
Computational reproducibility refers to making all aspects of your analysis reproducible, including specifics of the software you used to run the code you wrote. R packages get updated periodically and some of these updates may break your code. Using a computational reproducibility platform guards against this by always running your code in the same environment.
[Code Ocean](https://codeocean.com/) is a new platform that lets you run your code in the cloud via a web browser.
### 10\.4\.1 knitr options
When you create a new R Markdown file in RStudio, a setup chunk is automatically created.
````{r setup, include=FALSE}`
```
knitr::opts_chunk$set(echo = TRUE)
```
`````
You can set more default options for code chunks here. See the [knitr options documentation](https://yihui.name/knitr/options/) for explanations of the possible options.
````{r setup, include=FALSE}`
```
knitr::opts_chunk$set(
fig.width = 8,
fig.height = 5,
fig.path = 'images/',
echo = FALSE,
warning = TRUE,
message = FALSE,
cache = FALSE
)
```
`````
The code above sets the following options:
* `fig.width = 8` : figure width is 8 inches
* `fig.height = 5` : figure height is 5 inches
* `fig.path = 'images/'` : figures are saved in the directory “images”
* `echo = FALSE` : do not show code chunks in the rendered document
* `warning = FALSE` : do not show any function warnings
* `message = FALSE` : do not show any function messages
* `cache = FALSE` : run all the code to create all of the images and objects each time you knit (set to `TRUE` if you have time\-consuming code)
### 10\.4\.2 YAML Header
The [YAML](https://psyteachr.github.io/glossary/y#yaml "A structured format for information") header is where you can set several options.
```
---
title: "My Demo Document"
author: "Me"
output:
html_document:
theme: spacelab
highlight: tango
toc: true
toc_float:
collapsed: false
smooth_scroll: false
toc_depth: 3
number_sections: false
---
```
The built\-in themes are: “cerulean,” “cosmo,” “flatly,” “journal,” “lumen,” “paper,” “readable,” “sandstone,” “simplex,” “spacelab,” “united,” and “yeti.” You can [view and download more themes](http://www.datadreaming.org/post/r-markdown-theme-gallery/).
Try changing the values from `false` to `true` to see what the options do.
### 10\.4\.3 TOC and Document Headers
If you include a table of contents (`toc`), it is created from your document headers. Headers in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links.") are created by prefacing the header title with one or more hashes (`#`). Add a typical paper structure to your document like the one below.
```
## Abstract
My abstract here...
## Introduction
What's the question; why is it interesting?
## Methods
### Participants
How many participants and why? Do your power calculation here.
### Procedure
What will they do?
### Analysis
Describe the analysis plan...
## Results
Demo results for simulated data...
## Discussion
What does it all mean?
## References
```
### 10\.4\.4 Code Chunks
You can include [code chunks](https://psyteachr.github.io/glossary/c#chunk "A section of code in an R Markdown file") that create and display images, tables, or computations to include in your text. Let’s start by simulating some data.
First, create a code chunk in your document. You can put this before the abstract, since we won’t be showing the code in this document. We’ll use a modified version of the `two_sample` function from the [GLM lecture](09_glm.html) to create two groups with a difference of 0\.75 and 100 observations per group.
This function was modified to add sex and effect\-code both sex and group. Using the `recode` function to set effect or difference coding makes it clearer which value corresponds to which level. There is no effect of sex or interaction with group in these simulated data.
```
two_sample <- function(diff = 0.5, n_per_group = 20) {
tibble(Y = c(rnorm(n_per_group, -.5 * diff, sd = 1),
rnorm(n_per_group, .5 * diff, sd = 1)),
grp = factor(rep(c("a", "b"), each = n_per_group)),
sex = factor(rep(c("female", "male"), times = n_per_group))
) %>%
mutate(
grp_e = recode(grp, "a" = -0.5, "b" = 0.5),
sex_e = recode(sex, "female" = -0.5, "male" = 0.5)
)
}
```
This function requires the `tibble` and `dplyr` packages, so remember to load the whole tidyverse package at the top of this script (e.g., in the setup chunk).
Now we can make a separate code chunk to create our simulated dataset `dat`.
```
dat <- two_sample(diff = 0.75, n_per_group = 100)
```
#### 10\.4\.4\.1 Tables
Next, create a code chunk where you want to display a table of the descriptives (e.g., Participants section of the Methods). We’ll use tidyverse functions you learned in the [data wrangling lectures](04_wrangling.html) to create summary statistics for each group.
```
```{r, results='asis'}
dat %>%
group_by(grp, sex) %>%
summarise(n = n(),
Mean = mean(Y),
SD = sd(Y)) %>%
rename(group = grp) %>%
mutate_if(is.numeric, round, 3) %>%
knitr::kable()
```
```
```
## `summarise()` has grouped output by 'grp'. You can override using the `.groups` argument.
```
```
## `mutate_if()` ignored the following grouping variables:
## Column `group`
```
| group | sex | n | Mean | SD |
| --- | --- | --- | --- | --- |
| a | female | 50 | \-0\.361 | 0\.796 |
| a | male | 50 | \-0\.284 | 1\.052 |
| b | female | 50 | 0\.335 | 1\.080 |
| b | male | 50 | 0\.313 | 0\.904 |
Notice that the R chunk specifies the option `results='asis'`. This lets you format the table using the `kable()` function from `knitr`. You can also use more specialised functions from [papaja](https://crsh.github.io/papaja_man/reporting.html#tables) or [kableExtra](https://haozhu233.github.io/kableExtra/awesome_table_in_html.html) to format your tables.
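If you want more control over the table's appearance, the kableExtra functions operate on the output of `kable()`. A minimal sketch, assuming the `kableExtra` package is installed:
```
dat %>%
  group_by(grp, sex) %>%
  summarise(n = n(), Mean = mean(Y), SD = sd(Y), .groups = "drop") %>%
  knitr::kable(digits = 3) %>%
  kableExtra::kable_styling(full_width = FALSE, position = "left")
```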
#### 10\.4\.4\.2 Images
Next, create a code chunk where you want to display the image in your document. Let’s put it in the Results section. Use what you learned in the [data visualisation lecture](03_ggplot.html) to show violin\-boxplots for the two groups.
```
```{r, fig1, fig.cap="Figure 1. Scores by group and sex."}
ggplot(dat, aes(grp, Y, fill = sex)) +
geom_violin(alpha = 0.5) +
geom_boxplot(width = 0.25,
position = position_dodge(width = 0.9),
show.legend = FALSE) +
scale_fill_manual(values = c("orange", "purple")) +
xlab("Group") +
ylab("Score") +
theme(text = element_text(size = 30, family = "Times"))
```
```
The last line changes the default text size and font, which can be useful for generating figures that meet a journal’s requirements.
Figure 10\.1: Figure 1\. Scores by group and sex.
You can also include images that you did not create in R using the typical markdown syntax for images:
```
![All the things](path/to/image.png)
```
All the Things by [Hyperbole and a Half](http://hyperboleandahalf.blogspot.com/)
#### 10\.4\.4\.3 In\-line R
Now let’s use what you learned in the [GLM lecture](09_glm.html) to analyse our simulated data. The document is getting a little cluttered, so let’s move this code to external scripts.
* Create a new R script called “functions.R”
* Move the `library(tidyverse)` line and the `two_sample()` function definition to this file.
* Create a new R script called “analysis.R”
* Move the code for creating `dat` to this file.
* Add the following code to the end of the setup chunk:
```
source("functions.R")
source("analysis.R")
```
The `source` function lets you include code from an external file. This is really useful for making your documents readable. Just make sure you call your source files in the right order (e.g., include function definitions before you use the functions).
In the “analysis.R” file, we’re going to run the analysis code and save any numbers we might want to use in our manuscript to variables.
```
grp_lm <- lm(Y ~ grp_e * sex_e, data = dat)
stats <- grp_lm %>%
broom::tidy() %>%
mutate_if(is.numeric, round, 3)
```
The code above runs our analysis predicting `Y` from the effect\-coded group variable `grp_e`, the effect\-coded sex variable `sex_e`, and their interaction. The `tidy` function from the `broom` package turns the output into a tidy table. The `mutate_if` function uses `is.numeric` to check which columns are numeric and applies the `round` function, with the digits argument set to 3, to each of them.
If you want to report the results of the analysis in a paragraph instead of a table, you need to know how to refer to each number in the table. Like with everything in R, there are many ways to do this. One is by specifying the column and row number like this:
```
stats$p.value[2]
```
```
## [1] 0
```
Another way is to create variables for each row like this:
```
grp_stats <- filter(stats, term == "grp_e")
sex_stats <- filter(stats, term == "sex_e")
ixn_stats <- filter(stats, term == "grp_e:sex_e")
```
Add the above code to the end of your analysis.R file. Then you can refer to columns by name like this:
```
grp_stats$p.value
sex_stats$statistic
ixn_stats$estimate
```
```
## [1] 0
## [1] 0.197
## [1] -0.099
```
You can insert these numbers into a paragraph with inline R code that looks like this:
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
p = `r grp_stats$p.value`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
p = `r sex_stats$p.value`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
p = `r ixn_stats$p.value`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \= 0\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
Remember, line breaks are ignored when you render the file (unless you add two spaces at the end of lines), so you can use line breaks to make it easier to read your text with inline R code.
The p\-values aren’t formatted in APA style. We wrote a function to deal with this in the [function lecture](07_functions.html). Add this function to the “functions.R” file and change the inline text to use the `report_p` function.
```
report_p <- function(p, digits = 3) {
if (!is.numeric(p)) stop("p must be a number")
if (p <= 0) warning("p-values are normally greater than 0")
if (p >= 1) warning("p-values are normally less than 1")
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
fmt <- paste0("p = %.", digits, "f")
reported = sprintf(fmt, roundp)
}
reported
}
```
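Before using it inline, you can sanity\-check the function in the console; the values here are just for illustration:
```
report_p(0.04321)  # returns "p = 0.043"
report_p(0.00012)  # returns "p < .001"
```
The revised inline text then looks like this: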
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
`r report_p(grp_stats$p.value, 3)`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
`r report_p(sex_stats$p.value, 3)`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
`r report_p(ixn_stats$p.value, 3)`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \< .001\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
You might also want to report the statistics for the regression. There are a lot of numbers to format and insert, so it is easier to do this in the analysis script using `sprintf` for formatting.
```
s <- summary(grp_lm)
# calculate p value from fstatistic
fstat.p <- pf(s$fstatistic[1],
s$fstatistic[2],
s$fstatistic[3],
lower.tail = FALSE)
adj_r <- sprintf(
"The regression equation had an adjusted $R^{2}$ of %.3f ($F_{(%i, %i)}$ = %.3f, %s).",
round(s$adj.r.squared, 3),
s$fstatistic[2],
s$fstatistic[3],
round(s$fstatistic[1], 3),
report_p(fstat.p, 3)
)
```
Then you can just insert the text in your manuscript with inline R like this: `` `r adj_r` ``:
The regression equation had an adjusted \\(R^{2}\\) of 0\.090 (\\(F\_{(3, 196\)}\\) \= 7\.546, p \< .001\).
### 10\.4\.5 Bibliography
There are several ways to do in\-text citations and automatically generate a [bibliography](https://rmarkdown.rstudio.com/authoring_bibliographies_and_citations.html#bibliographies) in RMarkdown.
#### 10\.4\.5\.1 Create a BibTeX File Manually
You can just make a BibTeX file and add citations manually. Make a new Text File in RStudio called “bibliography.bib.”
Next, add the line `bibliography: bibliography.bib` to your YAML header.
You can add citations in the following format:
```
@article{shortname,
author = {Author One and Author Two and Author Three},
title = {Paper Title},
journal = {Journal Title},
volume = {vol},
number = {issue},
pages = {startpage--endpage},
year = {year},
doi = {doi}
}
```
#### 10\.4\.5\.2 Citing R packages
You can get the citation for an R package using the function `citation`. You can paste the BibTeX entry into your bibliography.bib file. Make sure to add a short name (e.g., “rmarkdown”) before the first comma so you can refer to the reference in your text.
```
citation(package="rmarkdown")
```
```
##
## To cite the 'rmarkdown' package in publications, please use:
##
## JJ Allaire and Yihui Xie and Jonathan McPherson and Javier Luraschi
## and Kevin Ushey and Aron Atkins and Hadley Wickham and Joe Cheng and
## Winston Chang and Richard Iannone (2021). rmarkdown: Dynamic
## Documents for R. R package version 2.9.4. URL
## https://rmarkdown.rstudio.com.
##
## Yihui Xie and J.J. Allaire and Garrett Grolemund (2018). R Markdown:
## The Definitive Guide. Chapman and Hall/CRC. ISBN 9781138359338. URL
## https://bookdown.org/yihui/rmarkdown.
##
## Yihui Xie and Christophe Dervieux and Emily Riederer (2020). R
## Markdown Cookbook. Chapman and Hall/CRC. ISBN 9780367563837. URL
## https://bookdown.org/yihui/rmarkdown-cookbook.
##
## To see these entries in BibTeX format, use 'print(<citation>,
## bibtex=TRUE)', 'toBibtex(.)', or set
## 'options(citation.bibtex.max=999)'.
```
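As that message suggests, it is easier to copy the entries if you print them in BibTeX format:
```
print(citation("rmarkdown"), bibtex = TRUE)
# or, equivalently
toBibtex(citation("rmarkdown"))
```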
#### 10\.4\.5\.3 Download Citation Info
You can get the BibTeX formatted citation from most publisher websites. For example, go to the publisher’s page for [Equivalence Testing for Psychological Research: A Tutorial](https://journals.sagepub.com/doi/abs/10.1177/2515245918770963), click on the Cite button (in the sidebar or under the bottom Explore More menu), choose BibTeX format, and download the citation. You can open up the file in a text editor and copy the text. It should look like this:
```
@article{doi:10.1177/2515245918770963,
author = {Daniël Lakens and Anne M. Scheel and Peder M. Isager},
title ={Equivalence Testing for Psychological Research: A Tutorial},
journal = {Advances in Methods and Practices in Psychological Science},
volume = {1},
number = {2},
pages = {259-269},
year = {2018},
doi = {10.1177/2515245918770963},
URL = {
https://doi.org/10.1177/2515245918770963
},
eprint = {
https://doi.org/10.1177/2515245918770963
}
,
abstract = { Psychologists must be able to test both for the presence of an effect and for the absence of an effect. In addition to testing against zero, researchers can use the two one-sided tests (TOST) procedure to test for equivalence and reject the presence of a smallest effect size of interest (SESOI). The TOST procedure can be used to determine if an observed effect is surprisingly small, given that a true effect at least as extreme as the SESOI exists. We explain a range of approaches to determine the SESOI in psychological science and provide detailed examples of how equivalence tests should be performed and reported. Equivalence tests are an important extension of the statistical tools psychologists currently use and enable researchers to falsify predictions about the presence, and declare the absence, of meaningful effects. }
}
```
Paste the reference into your bibliography.bib file. Change `doi:10.1177/2515245918770963` in the first line of the reference to a short string you will use to cite the reference in your manuscript. We’ll use `TOSTtutorial`.
#### 10\.4\.5\.4 Converting from reference software
Most reference managers, such as EndNote, Zotero, or Mendeley, can export references in BibTeX format. You just need to check the short names in the resulting file.
#### 10\.4\.5\.5 In\-text citations
You can cite references in text like this:
```
This tutorial uses several R packages [@tidyverse;@rmarkdown].
```
This tutorial uses several R packages ([Wickham 2017](#ref-tidyverse); [Allaire et al. 2018](#ref-rmarkdown)).
Put a minus in front of the @ if you just want the year:
```
Lakens, Scheel and Isager [-@TOSTtutorial] wrote a tutorial explaining how to test for the absence of an effect.
```
Lakens, Scheel and Isager ([2018](#ref-TOSTtutorial)) wrote a tutorial explaining how to test for the absence of an effect.
#### 10\.4\.5\.6 Citation Styles
You can search a [list of style files](https://www.zotero.org/styles) for various journals and download a file that will format your bibliography for a specific journal’s style. You’ll need to add the line `csl: filename.csl` to your YAML header.
Add some citations to your bibliography.bib file, reference them in your text, and render your manuscript to see the automatically generated reference section. Try a few different citation style files.
### 10\.4\.6 Output Formats
You can knit your file to PDF or Word if you have the right packages installed on your computer.
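For example, you can list extra formats under `output:` in the YAML header and then pick one from the Knit button's dropdown menu; a minimal sketch (PDF output additionally needs a LaTeX installation, e.g., via the `tinytex` package):
```
output:
  html_document: default
  word_document: default
  pdf_document: default
```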
### 10\.4\.7 Computational Reproducibility
Computational reproducibility refers to making all aspects of your analysis reproducible, including specifics of the software you used to run the code you wrote. R packages get updated periodically and some of these updates may break your code. Using a computational reproducibility platform guards against this by always running your code in the same environment.
[Code Ocean](https://codeocean.com/) is a new platform that lets you run your code in the cloud via a web browser.
10\.5 Glossary
--------------
| term | definition |
| --- | --- |
| [chunk](https://psyteachr.github.io/glossary/c#chunk) | A section of code in an R Markdown file |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [reproducibility](https://psyteachr.github.io/glossary/r#reproducibility) | The extent to which the findings of a study can be repeated in some other context |
| [yaml](https://psyteachr.github.io/glossary/y#yaml) | A structured format for information |
| Data Visualization |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/repro.html |
Chapter 10 Reproducible Workflows
=================================
10\.1 Learning Objectives
-------------------------
### 10\.1\.1 Basic
1. Create a reproducible script in R Markdown
2. Edit the YAML header to add table of contents and other options
3. Include a table
4. Include a figure
5. Use `source()` to include code from an external file
6. Report the output of an analysis using inline R
### 10\.1\.2 Intermediate
7. Output doc and PDF formats
8. Add a bibliography and in\-line citations
9. Format tables using `kableExtra`
### 10\.1\.3 Advanced
10. Create a computationally reproducible project in Code Ocean
10\.2 Resources
---------------
* [Chapter 27: R Markdown](http://r4ds.had.co.nz/r-markdown.html) in *R for Data Science*
* [R Markdown Cheat Sheet](http://www.rstudio.com/wp-content/uploads/2016/03/rmarkdown-cheatsheet-2.0.pdf)
* [R Markdown reference Guide](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)
* [R Markdown Tutorial](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown: The Definitive Guide](https://bookdown.org/yihui/rmarkdown/) by Yihui Xie, J. J. Allaire, \& Garrett Grolemund
* [Papaja](https://crsh.github.io/papaja_man/) Reproducible APA Manuscripts
* [Code Ocean](https://codeocean.com/) for Computational Reproducibility
10\.3 Setup
-----------
```
library(tidyverse)
library(knitr)
library(broom)
set.seed(8675309)
```
10\.4 R Markdown
----------------
By now you should be pretty comfortable working with R Markdown files from the weekly formative exercises and set exercises. Here, we’ll explore some of the more advanced options and create an R Markdown document that produces a [reproducible](https://psyteachr.github.io/glossary/r#reproducibility "The extent to which the findings of a study can be repeated in some other context") manuscript.
First, make a new R Markdown document.
### 10\.4\.1 knitr options
When you create a new R Markdown file in RStudio, a setup chunk is automatically created.
```
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
```
You can set more default options for code chunks here. See the [knitr options documentation](https://yihui.name/knitr/options/) for explanations of the possible options.
```
```{r setup, include=FALSE}
knitr::opts_chunk$set(
  fig.width = 8,
  fig.height = 5,
  fig.path = 'images/',
  echo = FALSE,
  warning = TRUE,
  message = FALSE,
  cache = FALSE
)
```
```
The code above sets the following options:
* `fig.width = 8` : figure width is 8 inches
* `fig.height = 5` : figure height is 5 inches
* `fig.path = 'images/'` : figures are saved in the directory “images”
* `echo = FALSE` : do not show code chunks in the rendered document
* `warning = TRUE` : show function warnings in the rendered document (set to `FALSE` to hide them)
* `message = FALSE` : do not show any function messages
* `cache = FALSE` : run all the code to create all of the images and objects each time you knit (set to `TRUE` if you have time\-consuming code)
### 10\.4\.2 YAML Header
The [YAML](https://psyteachr.github.io/glossary/y#yaml "A structured format for information") header is where you can set several options.
```
---
title: "My Demo Document"
author: "Me"
output:
html_document:
theme: spacelab
highlight: tango
toc: true
toc_float:
collapsed: false
smooth_scroll: false
toc_depth: 3
number_sections: false
---
```
The built\-in themes are: “cerulean,” “cosmo,” “flatly,” “journal,” “lumen,” “paper,” “readable,” “sandstone,” “simplex,” “spacelab,” “united,” and “yeti.” You can [view and download more themes](http://www.datadreaming.org/post/r-markdown-theme-gallery/).
Try changing the values from `false` to `true` to see what the options do.
### 10\.4\.3 TOC and Document Headers
If you include a table of contents (`toc`), it is created from your document headers. Headers in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links.") are created by prefacing the header title with one or more hashes (`#`). Add a typical paper structure to your document like the one below.
```
## Abstract
My abstract here...
## Introduction
What's the question; why is it interesting?
## Methods
### Participants
How many participants and why? Do your power calculation here.
### Procedure
What will they do?
### Analysis
Describe the analysis plan...
## Results
Demo results for simulated data...
## Discussion
What does it all mean?
## References
```
### 10\.4\.4 Code Chunks
You can include [code chunks](https://psyteachr.github.io/glossary/c#chunk "A section of code in an R Markdown file") that create and display images, tables, or computations to include in your text. Let’s start by simulating some data.
First, create a code chunk in your document. You can put this before the abstract, since we won’t be showing the code in this document. We’ll use a modified version of the `two_sample` function from the [GLM lecture](09_glm.html) to create two groups with a difference of 0\.75 and 100 observations per group.
This function was modified to add sex and effect\-code both sex and group. Using the `recode` function to set effect or difference coding makes it clearer which value corresponds to which level. There is no effect of sex or interaction with group in these simulated data.
```
two_sample <- function(diff = 0.5, n_per_group = 20) {
tibble(Y = c(rnorm(n_per_group, -.5 * diff, sd = 1),
rnorm(n_per_group, .5 * diff, sd = 1)),
grp = factor(rep(c("a", "b"), each = n_per_group)),
sex = factor(rep(c("female", "male"), times = n_per_group))
) %>%
mutate(
grp_e = recode(grp, "a" = -0.5, "b" = 0.5),
sex_e = recode(sex, "female" = -0.5, "male" = 0.5)
)
}
```
This function requires the `tibble` and `dplyr` packages, so remember to load the whole tidyverse package at the top of this script (e.g., in the setup chunk).
Now we can make a separate code chunk to create our simulated dataset `dat`.
```
dat <- two_sample(diff = 0.75, n_per_group = 100)
```
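While writing, it can help to sanity\-check the simulated data in the console (this does not need to go in the manuscript); for example, the group means should differ by roughly the simulated difference of 0\.75:
```
dat %>%
  group_by(grp) %>%
  summarise(mean_Y = mean(Y))
```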
#### 10\.4\.4\.1 Tables
Next, create a code chunk where you want to display a table of the descriptives (e.g., Participants section of the Methods). We’ll use tidyverse functions you learned in the [data wrangling lectures](04_wrangling.html) to create summary statistics for each group.
```
```{r, results='asis'}
dat %>%
group_by(grp, sex) %>%
summarise(n = n(),
Mean = mean(Y),
SD = sd(Y)) %>%
rename(group = grp) %>%
mutate_if(is.numeric, round, 3) %>%
knitr::kable()
```
```
```
## `summarise()` has grouped output by 'grp'. You can override using the `.groups` argument.
```
```
## `mutate_if()` ignored the following grouping variables:
## Column `group`
```
| group | sex | n | Mean | SD |
| --- | --- | --- | --- | --- |
| a | female | 50 | \-0\.361 | 0\.796 |
| a | male | 50 | \-0\.284 | 1\.052 |
| b | female | 50 | 0\.335 | 1\.080 |
| b | male | 50 | 0\.313 | 0\.904 |
Notice that the R chunk specifies the option `results='asis'`. This lets you format the table using the `kable()` function from `knitr`. You can also use more specialised functions from [papaja](https://crsh.github.io/papaja_man/reporting.html#tables) or [kableExtra](https://haozhu233.github.io/kableExtra/awesome_table_in_html.html) to format your tables.
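Even without extra packages, `kable()` itself can round the values and add a caption; a minimal sketch:
```
dat %>%
  group_by(grp, sex) %>%
  summarise(n = n(), Mean = mean(Y), SD = sd(Y), .groups = "drop") %>%
  knitr::kable(digits = 3, caption = "Descriptive statistics by group and sex.")
```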
#### 10\.4\.4\.2 Images
Next, create a code chunk where you want to display the image in your document. Let’s put it in the Results section. Use what you learned in the [data visualisation lecture](03_ggplot.html) to show violin\-boxplots for the two groups.
```
```{r, fig1, fig.cap="Figure 1. Scores by group and sex."}
ggplot(dat, aes(grp, Y, fill = sex)) +
geom_violin(alpha = 0.5) +
geom_boxplot(width = 0.25,
position = position_dodge(width = 0.9),
show.legend = FALSE) +
scale_fill_manual(values = c("orange", "purple")) +
xlab("Group") +
ylab("Score") +
theme(text = element_text(size = 30, family = "Times"))
```
```
The last line changes the default text size and font, which can be useful for generating figures that meet a journal’s requirements.
Figure 10\.1: Figure 1\. Scores by group and sex.
You can also include images that you did not create in R using the typical markdown syntax for images:
```
![All the things](path/to/image.png)
```
All the Things by [Hyperbole and a Half](http://hyperboleandahalf.blogspot.com/)
#### 10\.4\.4\.3 In\-line R
Now let’s use what you learned in the [GLM lecture](09_glm.html) to analyse our simulated data. The document is getting a little cluttered, so let’s move this code to external scripts.
* Create a new R script called “functions.R”
* Move the `library(tidyverse)` line and the `two_sample()` function definition to this file.
* Create a new R script called “analysis.R”
* Move the code for creating `dat` to this file.
* Add the following code to the end of the setup chunk:
```
source("functions.R")
source("analysis.R")
```
The `source` function lets you include code from an external file. This is really useful for making your documents readable. Just make sure you call your source files in the right order (e.g., include function definitions before you use the functions).
In the “analysis.R” file, we’re going to run the analysis code and save any numbers we might want to use in our manuscript to variables.
```
grp_lm <- lm(Y ~ grp_e * sex_e, data = dat)
stats <- grp_lm %>%
broom::tidy() %>%
mutate_if(is.numeric, round, 3)
```
The code above runs our analysis predicting `Y` from the effect\-coded group variable `grp_e`, the effect\-coded sex variable `sex_e`, and their interaction. The `tidy` function from the `broom` package turns the output into a tidy table. The `mutate_if` function uses `is.numeric` to check which columns are numeric and applies the `round` function, with the digits argument set to 3, to each of them.
If you want to report the results of the analysis in a paragraph instead of a table, you need to know how to refer to each number in the table. Like with everything in R, there are many ways to do this. One is by specifying the column and row number like this:
```
stats$p.value[2]
```
```
## [1] 0
```
Another way is to create variables for each row like this:
```
grp_stats <- filter(stats, term == "grp_e")
sex_stats <- filter(stats, term == "sex_e")
ixn_stats <- filter(stats, term == "grp_e:sex_e")
```
Add the above code to the end of your analysis.R file. Then you can refer to columns by name like this:
```
grp_stats$p.value
sex_stats$statistic
ixn_stats$estimate
```
```
## [1] 0
## [1] 0.197
## [1] -0.099
```
You can insert these numbers into a paragraph with inline R code that looks like this:
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
p = `r grp_stats$p.value`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
p = `r sex_stats$p.value`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
p = `r ixn_stats$p.value`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \= 0\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
Remember, line breaks are ignored when you render the file (unless you add two spaces at the end of lines), so you can use line breaks to make it easier to read your text with inline R code.
The p\-values aren’t formatted in APA style. We wrote a function to deal with this in the [function lecture](07_functions.html). Add this function to the “functions.R” file and change the inline text to use the `report_p` function.
```
report_p <- function(p, digits = 3) {
if (!is.numeric(p)) stop("p must be a number")
if (p <= 0) warning("p-values are normally greater than 0")
if (p >= 1) warning("p-values are normally less than 1")
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
fmt <- paste0("p = %.", digits, "f")
reported = sprintf(fmt, roundp)
}
reported
}
```
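The function formats one p\-value at a time; if you ever need several at once, you can apply it over a vector (a small sketch):
```
sapply(c(0.0567, 0.0004), report_p)
# [1] "p = 0.057" "p < .001"
```
The revised inline text then looks like this: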
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
`r report_p(grp_stats$p.value, 3)`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
`r report_p(sex_stats$p.value, 3)`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
`r report_p(ixn_stats$p.value, 3)`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \< .001\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
You might also want to report the statistics for the regression. There are a lot of numbers to format and insert, so it is easier to do this in the analysis script using `sprintf` for formatting.
```
s <- summary(grp_lm)
# calculate p value from fstatistic
fstat.p <- pf(s$fstatistic[1],
s$fstatistic[2],
s$fstatistic[3],
lower.tail = FALSE)
adj_r <- sprintf(
"The regression equation had an adjusted $R^{2}$ of %.3f ($F_{(%i, %i)}$ = %.3f, %s).",
round(s$adj.r.squared, 3),
s$fstatistic[2],
s$fstatistic[3],
round(s$fstatistic[1], 3),
report_p(fstat.p, 3)
)
```
Then you can just insert the text in your manuscript with inline R like this: `` `r adj_r` ``:
The regression equation had an adjusted \\(R^{2}\\) of 0\.090 (\\(F\_{(3, 196\)}\\) \= 7\.546, p \< .001\).
### 10\.4\.5 Bibliography
There are several ways to do in\-text citations and automatically generate a [bibliography](https://rmarkdown.rstudio.com/authoring_bibliographies_and_citations.html#bibliographies) in RMarkdown.
#### 10\.4\.5\.1 Create a BibTeX File Manually
You can just make a BibTeX file and add citations manually. Make a new Text File in RStudio called “bibliography.bib.”
Next, add the line `bibliography: bibliography.bib` to your YAML header.
You can add citations in the following format:
```
@article{shortname,
author = {Author One and Author Two and Author Three},
title = {Paper Title},
journal = {Journal Title},
volume = {vol},
number = {issue},
pages = {startpage--endpage},
year = {year},
doi = {doi}
}
```
#### 10\.4\.5\.2 Citing R packages
You can get the citation for an R package using the function `citation`. You can paste the BibTeX entry into your bibliography.bib file. Make sure to add a short name (e.g., “rmarkdown”) before the first comma so you can refer to the reference in your text.
```
citation(package="rmarkdown")
```
```
##
## To cite the 'rmarkdown' package in publications, please use:
##
## JJ Allaire and Yihui Xie and Jonathan McPherson and Javier Luraschi
## and Kevin Ushey and Aron Atkins and Hadley Wickham and Joe Cheng and
## Winston Chang and Richard Iannone (2021). rmarkdown: Dynamic
## Documents for R. R package version 2.9.4. URL
## https://rmarkdown.rstudio.com.
##
## Yihui Xie and J.J. Allaire and Garrett Grolemund (2018). R Markdown:
## The Definitive Guide. Chapman and Hall/CRC. ISBN 9781138359338. URL
## https://bookdown.org/yihui/rmarkdown.
##
## Yihui Xie and Christophe Dervieux and Emily Riederer (2020). R
## Markdown Cookbook. Chapman and Hall/CRC. ISBN 9780367563837. URL
## https://bookdown.org/yihui/rmarkdown-cookbook.
##
## To see these entries in BibTeX format, use 'print(<citation>,
## bibtex=TRUE)', 'toBibtex(.)', or set
## 'options(citation.bibtex.max=999)'.
```
#### 10\.4\.5\.3 Download Citation Info
You can get the BibTeX formatted citation from most publisher websites. For example, go to the publisher’s page for [Equivalence Testing for Psychological Research: A Tutorial](https://journals.sagepub.com/doi/abs/10.1177/2515245918770963), click on the Cite button (in the sidebar or under the bottom Explore More menu), choose BibTeX format, and download the citation. You can open up the file in a text editor and copy the text. It should look like this:
```
@article{doi:10.1177/2515245918770963,
author = {Daniël Lakens and Anne M. Scheel and Peder M. Isager},
title ={Equivalence Testing for Psychological Research: A Tutorial},
journal = {Advances in Methods and Practices in Psychological Science},
volume = {1},
number = {2},
pages = {259-269},
year = {2018},
doi = {10.1177/2515245918770963},
URL = {
https://doi.org/10.1177/2515245918770963
},
eprint = {
https://doi.org/10.1177/2515245918770963
}
,
abstract = { Psychologists must be able to test both for the presence of an effect and for the absence of an effect. In addition to testing against zero, researchers can use the two one-sided tests (TOST) procedure to test for equivalence and reject the presence of a smallest effect size of interest (SESOI). The TOST procedure can be used to determine if an observed effect is surprisingly small, given that a true effect at least as extreme as the SESOI exists. We explain a range of approaches to determine the SESOI in psychological science and provide detailed examples of how equivalence tests should be performed and reported. Equivalence tests are an important extension of the statistical tools psychologists currently use and enable researchers to falsify predictions about the presence, and declare the absence, of meaningful effects. }
}
```
Paste the reference into your bibliography.bib file. Change `doi:10.1177/2515245918770963` in the first line of the reference to a short string you will use to cite the reference in your manuscript. We’ll use `TOSTtutorial`.
#### 10\.4\.5\.4 Converting from reference software
Most reference managers, such as EndNote, Zotero, or Mendeley, can export references in BibTeX format. You just need to check the short names in the resulting file.
#### 10\.4\.5\.5 In\-text citations
You can cite references in text like this:
```
This tutorial uses several R packages [@tidyverse;@rmarkdown].
```
This tutorial uses several R packages ([Wickham 2017](#ref-tidyverse); [Allaire et al. 2018](#ref-rmarkdown)).
Put a minus in front of the @ if you just want the year:
```
Lakens, Scheel and Isager [-@TOSTtutorial] wrote a tutorial explaining how to test for the absence of an effect.
```
Lakens, Scheel and Isager ([2018](#ref-TOSTtutorial)) wrote a tutorial explaining how to test for the absence of an effect.
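Pandoc's citation syntax has a few other useful variants; a small sketch, assuming the keys exist in your bibliography.bib file:
```
Equivalence testing is explained in a tutorial [see @TOSTtutorial; also @rmarkdown].
@TOSTtutorial explain how to test for the absence of an effect.
```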
#### 10\.4\.5\.6 Citation Styles
You can search a [list of style files](https://www.zotero.org/styles) for various journals and download a file that will format your bibliography for a specific journal’s style. You’ll need to add the line `csl: filename.csl` to your YAML header.
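Putting the bibliography options together, the YAML header might look like this (the file names are just examples; use the ones you actually created or downloaded):
```
---
title: "My Demo Document"
author: "Me"
output: html_document
bibliography: bibliography.bib
csl: apa.csl
---
```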
Add some citations to your bibliography.bib file, reference them in your text, and render your manuscript to see the automatically generated reference section. Try a few different citation style files.
### 10\.4\.6 Output Formats
You can knit your file to PDF or Word if you have the right packages installed on your computer.
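You can also render a specific format from the console without editing the YAML header; a minimal sketch (the file name is just an example, and PDF output additionally needs a LaTeX installation such as TinyTeX):
```
rmarkdown::render("my_manuscript.Rmd", output_format = "word_document")
```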
### 10\.4\.7 Computational Reproducibility
Computational reproducibility refers to making all aspects of your analysis reproducible, including specifics of the software you used to run the code you wrote. R packages get updated periodically and some of these updates may break your code. Using a computational reproducibility platform guards against this by always running your code in the same environment.
[Code Ocean](https://codeocean.com/) is a new platform that lets you run your code in the cloud via a web browser.
10\.5 Glossary
--------------
| term | definition |
| --- | --- |
| [chunk](https://psyteachr.github.io/glossary/c#chunk) | A section of code in an R Markdown file |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [reproducibility](https://psyteachr.github.io/glossary/r#reproducibility) | The extent to which the findings of a study can be repeated in some other context |
| [yaml](https://psyteachr.github.io/glossary/y#yaml) | A structured format for information |
10\.6 References
----------------
10\.1 Learning Objectives
-------------------------
### 10\.1\.1 Basic
1. Create a reproducible script in R Markdown
2. Edit the YAML header to add table of contents and other options
3. Include a table
4. Include a figure
5. Use `source()` to include code from an external file
6. Report the output of an analysis using inline R
### 10\.1\.2 Intermediate
7. Output doc and PDF formats
8. Add a bibliography and in\-line citations
9. Format tables using `kableExtra`
### 10\.1\.3 Advanced
10. Create a computationally reproducible project in Code Ocean
### 10\.1\.1 Basic
1. Create a reproducible script in R Markdown
2. Edit the YAML header to add table of contents and other options
3. Include a table
4. Include a figure
5. Use `source()` to include code from an external file
6. Report the output of an analysis using inline R
### 10\.1\.2 Intermediate
7. Output doc and PDF formats
8. Add a bibliography and in\-line citations
9. Format tables using `kableExtra`
### 10\.1\.3 Advanced
10. Create a computationally reproducible project in Code Ocean
10\.2 Resources
---------------
* [Chapter 27: R Markdown](http://r4ds.had.co.nz/r-markdown.html) in *R for Data Science*
* [R Markdown Cheat Sheet](http://www.rstudio.com/wp-content/uploads/2016/03/rmarkdown-cheatsheet-2.0.pdf)
* [R Markdown reference Guide](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)
* [R Markdown Tutorial](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown: The Definitive Guide](https://bookdown.org/yihui/rmarkdown/) by Yihui Xie, J. J. Allaire, \& Garrett Grolemund
* [Papaja](https://crsh.github.io/papaja_man/) Reproducible APA Manuscripts
* [Code Ocean](https://codeocean.com/) for Computational Reproducibility
10\.3 Setup
-----------
```
library(tidyverse)
library(knitr)
library(broom)
set.seed(8675309)
```
10\.4 R Markdown
----------------
By now you should be pretty comfortable working with R Markdown files from the weekly formative exercises and set exercises. Here, we’ll explore some of the more advanced options and create an R Markdown document that produces a [reproducible](https://psyteachr.github.io/glossary/r#reproducibility "The extent to which the findings of a study can be repeated in some other context") manuscript.
First, make a new R Markdown document.
### 10\.4\.1 knitr options
When you create a new R Markdown file in RStudio, a setup chunk is automatically created.
````{r setup, include=FALSE}`
```
knitr::opts_chunk$set(echo = TRUE)
```
`````
You can set more default options for code chunks here. See the [knitr options documentation](https://yihui.name/knitr/options/) for explanations of the possible options.
````{r setup, include=FALSE}`
```
knitr::opts_chunk$set(
fig.width = 8,
fig.height = 5,
fig.path = 'images/',
echo = FALSE,
warning = TRUE,
message = FALSE,
cache = FALSE
)
```
`````
The code above sets the following options:
* `fig.width = 8` : figure width is 8 inches
* `fig.height = 5` : figure height is 5 inches
* `fig.path = 'images/'` : figures are saved in the directory “images”
* `echo = FALSE` : do not show code chunks in the rendered document
* `warning = FALSE` : do not show any function warnings
* `message = FALSE` : do not show any function messages
* `cache = FALSE` : run all the code to create all of the images and objects each time you knit (set to `TRUE` if you have time\-consuming code)
### 10\.4\.2 YAML Header
The [YAML](https://psyteachr.github.io/glossary/y#yaml "A structured format for information") header is where you can set several options.
```
---
title: "My Demo Document"
author: "Me"
output:
html_document:
theme: spacelab
highlight: tango
toc: true
toc_float:
collapsed: false
smooth_scroll: false
toc_depth: 3
number_sections: false
---
```
The built\-in themes are: “cerulean,” “cosmo,” “flatly,” “journal,” “lumen,” “paper,” “readable,” “sandstone,” “simplex,” “spacelab,” “united,” and “yeti.” You can [view and download more themes](http://www.datadreaming.org/post/r-markdown-theme-gallery/).
Try changing the values from `false` to `true` to see what the options do.
### 10\.4\.3 TOC and Document Headers
If you include a table of contents (`toc`), it is created from your document headers. Headers in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links.") are created by prefacing the header title with one or more hashes (`#`). Add a typical paper structure to your document like the one below.
```
## Abstract
My abstract here...
## Introduction
What's the question; why is it interesting?
## Methods
### Participants
How many participants and why? Do your power calculation here.
### Procedure
What will they do?
### Analysis
Describe the analysis plan...
## Results
Demo results for simulated data...
## Discussion
What does it all mean?
## References
```
### 10\.4\.4 Code Chunks
You can include [code chunks](https://psyteachr.github.io/glossary/c#chunk "A section of code in an R Markdown file") that create and display images, tables, or computations to include in your text. Let’s start by simulating some data.
First, create a code chunk in your document. You can put this before the abstract, since we won’t be showing the code in this document. We’ll use a modified version of the `two_sample` function from the [GLM lecture](09_glm.html) to create two groups with a difference of 0\.75 and 100 observations per group.
This function was modified to add sex and effect\-code both sex and group. Using the `recode` function to set effect or difference coding makes it clearer which value corresponds to which level. There is no effect of sex or interaction with group in these simulated data.
```
two_sample <- function(diff = 0.5, n_per_group = 20) {
tibble(Y = c(rnorm(n_per_group, -.5 * diff, sd = 1),
rnorm(n_per_group, .5 * diff, sd = 1)),
grp = factor(rep(c("a", "b"), each = n_per_group)),
sex = factor(rep(c("female", "male"), times = n_per_group))
) %>%
mutate(
grp_e = recode(grp, "a" = -0.5, "b" = 0.5),
sex_e = recode(sex, "female" = -0.5, "male" = 0.5)
)
}
```
This function requires the `tibble` and `dplyr` packages, so remember to load the whole tidyverse package at the top of this script (e.g., in the setup chunk).
Now we can make a separate code chunk to create our simulated dataset `dat`.
```
dat <- two_sample(diff = 0.75, n_per_group = 100)
```
#### 10\.4\.4\.1 Tables
Next, create a code chunk where you want to display a table of the descriptives (e.g., Participants section of the Methods). We’ll use tidyverse functions you learned in the [data wrangling lectures](04_wrangling.html) to create summary statistics for each group.
```
```{r, results='asis'}
dat %>%
group_by(grp, sex) %>%
summarise(n = n(),
Mean = mean(Y),
SD = sd(Y)) %>%
rename(group = grp) %>%
mutate_if(is.numeric, round, 3) %>%
knitr::kable()
```
```
```
## `summarise()` has grouped output by 'grp'. You can override using the `.groups` argument.
```
```
## `mutate_if()` ignored the following grouping variables:
## Column `group`
```
| group | sex | n | Mean | SD |
| --- | --- | --- | --- | --- |
| a | female | 50 | \-0\.361 | 0\.796 |
| a | male | 50 | \-0\.284 | 1\.052 |
| b | female | 50 | 0\.335 | 1\.080 |
| b | male | 50 | 0\.313 | 0\.904 |
Notice that the r chunk specifies the option `results='asis'`. This lets you format the table using the `kable()` function from `knitr`. You can also use more specialised functions from [papaja](https://crsh.github.io/papaja_man/reporting.html#tables) or [kableExtra](https://haozhu233.github.io/kableExtra/awesome_table_in_html.html) to format your tables.
#### 10\.4\.4\.2 Images
Next, create a code chunk where you want to display the image in your document. Let’s put it in the Results section. Use what you learned in the [data visualisation lecture](03_ggplot.html) to show violin\-boxplots for the two groups.
```
```{r, fig1, fig.cap="Figure 1. Scores by group and sex."}
ggplot(dat, aes(grp, Y, fill = sex)) +
geom_violin(alpha = 0.5) +
geom_boxplot(width = 0.25,
position = position_dodge(width = 0.9),
show.legend = FALSE) +
scale_fill_manual(values = c("orange", "purple")) +
xlab("Group") +
ylab("Score") +
theme(text = element_text(size = 30, family = "Times"))
```
```
The last line changes the default text size and font, which can be useful for generating figures that meet a journal’s requirements.
Figure 10\.1: Figure 1\. Scores by group and sex.
You can also include images that you did not create in R using the typical markdown syntax for images:
```
```
All the Things by [Hyperbole and a Half](http://hyperboleandahalf.blogspot.com/)
#### 10\.4\.4\.3 In\-line R
Now let’s use what you learned in the [GLM lecture](09_glm.html) to analyse our simulated data. The document is getting a little cluttered, so let’s move this code to external scripts.
* Create a new R script called “functions.R”
* Move the `library(tidyverse)` line and the `two_sample()` function definition to this file.
* Create a new R script called “analysis.R”
* Move the code for creating `dat` to this file.
* Add the following code to the end of the setup chunk:
```
source("functions.R")
source("analysis.R")
```
The `source` function lets you include code from an external file. This is really useful for making your documents readable. Just make sure you call your source files in the right order (e.g., include function definitions before you use the functions).
In the “analysis.R” file, we’re going to run the analysis code and save any numbers we might want to use in our manuscript to variables.
```
grp_lm <- lm(Y ~ grp_e * sex_e, data = dat)
stats <- grp_lm %>%
broom::tidy() %>%
mutate_if(is.numeric, round, 3)
```
The code above runs our analysis predicting `Y` from the effect\-coded group variable `grp_e`, the effect\-coded sex variable `sex_e`, and their interaction. The `tidy` function from the `broom` package turns the output into a tidy table. The `mutate_if` function uses `is.numeric` to check whether each column should be mutated and, if it is numeric, applies the `round` function with the digits argument set to 3\.
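If you are using dplyr 1.0 or later, `mutate_if()` is superseded by `across()`, so the rounding step could equivalently be written as the sketch below:
```
# equivalent to mutate_if(is.numeric, round, 3) in dplyr >= 1.0
stats <- grp_lm %>%
  broom::tidy() %>%
  mutate(across(where(is.numeric), ~ round(.x, 3)))
```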
If you want to report the results of the analysis in a paragraph instead of a table, you need to know how to refer to each number in the table. Like with everything in R, there are many ways to do this. One is by specifying the column and row number like this:
```
stats$p.value[2]
```
```
## [1] 0
```
Another way is to create variables for each row like this:
```
grp_stats <- filter(stats, term == "grp_e")
sex_stats <- filter(stats, term == "sex_e")
ixn_stats <- filter(stats, term == "grp_e:sex_e")
```
Add the above code to the end of your analysis.R file. Then you can refer to columns by name like this:
```
grp_stats$p.value
sex_stats$statistic
ixn_stats$estimate
```
```
## [1] 0
## [1] 0.197
## [1] -0.099
```
You can insert these numbers into a paragraph with inline R code that looks like this:
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
p = `r grp_stats$p.value`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
p = `r sex_stats$p.value`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
p = `r ixn_stats$p.value`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \= 0\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
Remember, line breaks are ignored when you render the file (unless you add two spaces at the end of lines), so you can use line breaks to make it easier to read your text with inline R code.
The p\-values aren’t formatted in APA style. We wrote a function to deal with this in the [function lecture](07_functions.html). Add this function to the “functions.R” file and change the inline text to use the `report_p` function.
```
report_p <- function(p, digits = 3) {
if (!is.numeric(p)) stop("p must be a number")
if (p <= 0) warning("p-values are normally greater than 0")
if (p >= 1) warning("p-values are normally less than 1")
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
fmt <- paste0("p = %.", digits, "f")
reported = sprintf(fmt, roundp)
}
reported
}
```
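You can check what the function returns by calling it in the console; the values in the comments below assume the definition above:
```
report_p(0.0002) # "p < .001"
report_p(0.044)  # "p = 0.044"
```
The inline text then becomes: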
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
`r report_p(grp_stats$p.value, 3)`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
`r report_p(sex_stats$p.value, 3)`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
`r report_p(ixn_stats$p.value, 3)`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \< .001\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
You might also want to report the statistics for the regression. There are a lot of numbers to format and insert, so it is easier to do this in the analysis script using `sprintf` for formatting.
```
s <- summary(grp_lm)
# calculate p value from fstatistic
fstat.p <- pf(s$fstatistic[1],
s$fstatistic[2],
s$fstatistic[3],
lower.tail = FALSE)
adj_r <- sprintf(
"The regression equation had an adjusted $R^{2}$ of %.3f ($F_{(%i, %i)}$ = %.3f, %s).",
round(s$adj.r.squared, 3),
s$fstatistic[2],
s$fstatistic[3],
round(s$fstatistic[1], 3),
report_p(fstat.p, 3)
)
```
Then you can just insert the text in your manuscript with inline R like this: \`r adj\_r\`:
The regression equation had an adjusted \\(R^{2}\\) of 0\.090 (\\(F\_{(3, 196\)}\\) \= 7\.546, p \< .001\).
### 10\.4\.5 Bibliography
There are several ways to do in\-text citations and automatically generate a [bibliography](https://rmarkdown.rstudio.com/authoring_bibliographies_and_citations.html#bibliographies) in RMarkdown.
#### 10\.4\.5\.1 Create a BibTeX File Manually
You can just make a BibTeX file and add citations manually. Make a new Text File in RStudio called “bibliography.bib.”
Next, add the line `bibliography: bibliography.bib` to your YAML header.
You can add citations in the following format:
```
@article{shortname,
author = {Author One and Author Two and Author Three},
title = {Paper Title},
journal = {Journal Title},
volume = {vol},
number = {issue},
pages = {startpage--endpage},
year = {year},
doi = {doi}
}
```
#### 10\.4\.5\.2 Citing R packages
You can get the citation for an R package using the function `citation`. You can paste the bibtex entry into your bibliography.bib file. Make sure to add a short name (e.g., “rmarkdown”) before the first comma to refer to the reference.
```
citation(package="rmarkdown")
```
```
##
## To cite the 'rmarkdown' package in publications, please use:
##
## JJ Allaire and Yihui Xie and Jonathan McPherson and Javier Luraschi
## and Kevin Ushey and Aron Atkins and Hadley Wickham and Joe Cheng and
## Winston Chang and Richard Iannone (2021). rmarkdown: Dynamic
## Documents for R. R package version 2.9.4. URL
## https://rmarkdown.rstudio.com.
##
## Yihui Xie and J.J. Allaire and Garrett Grolemund (2018). R Markdown:
## The Definitive Guide. Chapman and Hall/CRC. ISBN 9781138359338. URL
## https://bookdown.org/yihui/rmarkdown.
##
## Yihui Xie and Christophe Dervieux and Emily Riederer (2020). R
## Markdown Cookbook. Chapman and Hall/CRC. ISBN 9780367563837. URL
## https://bookdown.org/yihui/rmarkdown-cookbook.
##
## To see these entries in BibTeX format, use 'print(<citation>,
## bibtex=TRUE)', 'toBibtex(.)', or set
## 'options(citation.bibtex.max=999)'.
```
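As the end of that output suggests, you can also print the entry directly in BibTeX format, which is convenient for pasting into bibliography.bib:
```
# print the citation as a BibTeX entry
toBibtex(citation(package = "rmarkdown"))
```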
#### 10\.4\.5\.3 Download Citation Info
You can get the BibTeX formatted citation from most publisher websites. For example, go to the publisher’s page for [Equivalence Testing for Psychological Research: A Tutorial](https://journals.sagepub.com/doi/abs/10.1177/2515245918770963), click on the Cite button (in the sidebar or under the bottom Explore More menu), choose BibTeX format, and download the citation. You can open up the file in a text editor and copy the text. It should look like this:
```
@article{doi:10.1177/2515245918770963,
author = {Daniël Lakens and Anne M. Scheel and Peder M. Isager},
title ={Equivalence Testing for Psychological Research: A Tutorial},
journal = {Advances in Methods and Practices in Psychological Science},
volume = {1},
number = {2},
pages = {259-269},
year = {2018},
doi = {10.1177/2515245918770963},
URL = {
https://doi.org/10.1177/2515245918770963
},
eprint = {
https://doi.org/10.1177/2515245918770963
}
,
abstract = { Psychologists must be able to test both for the presence of an effect and for the absence of an effect. In addition to testing against zero, researchers can use the two one-sided tests (TOST) procedure to test for equivalence and reject the presence of a smallest effect size of interest (SESOI). The TOST procedure can be used to determine if an observed effect is surprisingly small, given that a true effect at least as extreme as the SESOI exists. We explain a range of approaches to determine the SESOI in psychological science and provide detailed examples of how equivalence tests should be performed and reported. Equivalence tests are an important extension of the statistical tools psychologists currently use and enable researchers to falsify predictions about the presence, and declare the absence, of meaningful effects. }
}
```
Paste the reference into your bibliography.bib file. Change `doi:10.1177/2515245918770963` in the first line of the reference to a short string you will use to cite the reference in your manuscript. We’ll use `TOSTtutorial`.
#### 10\.4\.5\.4 Converting from reference software
Most reference software, such as EndNote, Zotero, or Mendeley, has export options that can produce BibTeX format. You just need to check the short names in the resulting file.
#### 10\.4\.5\.5 In\-text citations
You can cite references in text like this:
```
This tutorial uses several R packages [@tidyverse;@rmarkdown].
```
This tutorial uses several R packages ([Wickham 2017](#ref-tidyverse); [Allaire et al. 2018](#ref-rmarkdown)).
Put a minus in front of the @ if you just want the year:
```
Lakens, Scheel and Isager [-@TOSTtutorial] wrote a tutorial explaining how to test for the absence of an effect.
```
Lakens, Scheel and Isager ([2018](#ref-TOSTtutorial)) wrote a tutorial explaining how to test for the absence of an effect.
#### 10\.4\.5\.6 Citation Styles
You can search a [list of style files](https://www.zotero.org/styles) for various journals and download a file that will format your bibliography for a specific journal’s style. You’ll need to add the line `csl: filename.csl` to your YAML header.
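Putting the bibliography and citation style options together, the top of your YAML header might look something like this (`apa.csl` is a placeholder for whichever style file you downloaded):
```
---
title: "My Demo Document"
author: "Me"
output: html_document
bibliography: bibliography.bib
csl: apa.csl
---
```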
Add some citations to your bibliography.bib file, reference them in your text, and render your manuscript to see the automatically generated reference section. Try a few different citation style files.
### 10\.4\.6 Output Formats
You can knit your file to PDF or Word if you have the right packages installed on your computer.
### 10\.4\.7 Computational Reproducibility
Computational reproducibility refers to making all aspects of your analysis reproducible, including specifics of the software you used to run the code you wrote. R packages get updated periodically and some of these updates may break your code. Using a computational reproducibility platform guards against this by always running your code in the same environment.
[Code Ocean](https://codeocean.com/) is a new platform that lets you run your code in the cloud via a web browser.
10\.5 Glossary
--------------
| term | definition |
| --- | --- |
| [chunk](https://psyteachr.github.io/glossary/c#chunk) | A section of code in an R Markdown file |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [reproducibility](https://psyteachr.github.io/glossary/r#reproducibility) | The extent to which the findings of a study can be repeated in some other context |
| [yaml](https://psyteachr.github.io/glossary/y#yaml) | A structured format for information |
| Field Specific |
psyteachr.github.io | https://psyteachr.github.io/msc-data-skills/repro.html |
Chapter 10 Reproducible Workflows
=================================
10\.1 Learning Objectives
-------------------------
### 10\.1\.1 Basic
1. Create a reproducible script in R Markdown
2. Edit the YAML header to add table of contents and other options
3. Include a table
4. Include a figure
5. Use `source()` to include code from an external file
6. Report the output of an analysis using inline R
### 10\.1\.2 Intermediate
7. Output doc and PDF formats
8. Add a bibliography and in\-line citations
9. Format tables using `kableExtra`
### 10\.1\.3 Advanced
10. Create a computationally reproducible project in Code Ocean
10\.2 Resources
---------------
* [Chapter 27: R Markdown](http://r4ds.had.co.nz/r-markdown.html) in *R for Data Science*
* [R Markdown Cheat Sheet](http://www.rstudio.com/wp-content/uploads/2016/03/rmarkdown-cheatsheet-2.0.pdf)
* [R Markdown reference Guide](https://www.rstudio.com/wp-content/uploads/2015/03/rmarkdown-reference.pdf)
* [R Markdown Tutorial](https://rmarkdown.rstudio.com/lesson-1.html)
* [R Markdown: The Definitive Guide](https://bookdown.org/yihui/rmarkdown/) by Yihui Xie, J. J. Allaire, \& Garrett Grolemund
* [Papaja](https://crsh.github.io/papaja_man/) Reproducible APA Manuscripts
* [Code Ocean](https://codeocean.com/) for Computational Reproducibility
10\.3 Setup
-----------
```
library(tidyverse)
library(knitr)
library(broom)
set.seed(8675309)
```
10\.4 R Markdown
----------------
By now you should be pretty comfortable working with R Markdown files from the weekly formative exercises and set exercises. Here, we’ll explore some of the more advanced options and create an R Markdown document that produces a [reproducible](https://psyteachr.github.io/glossary/r#reproducibility "The extent to which the findings of a study can be repeated in some other context") manuscript.
First, make a new R Markdown document.
### 10\.4\.1 knitr options
When you create a new R Markdown file in RStudio, a setup chunk is automatically created.
````
```{r setup, include=FALSE}
knitr::opts_chunk$set(echo = TRUE)
```
````
You can set more default options for code chunks here. See the [knitr options documentation](https://yihui.name/knitr/options/) for explanations of the possible options.
````
```{r setup, include=FALSE}
knitr::opts_chunk$set(
  fig.width = 8,
  fig.height = 5,
  fig.path = 'images/',
  echo = FALSE,
  warning = TRUE,
  message = FALSE,
  cache = FALSE
)
```
````
The code above sets the following options:
* `fig.width = 8` : figure width is 8 inches
* `fig.height = 5` : figure height is 5 inches
* `fig.path = 'images/'` : figures are saved in the directory “images”
* `echo = FALSE` : do not show code chunks in the rendered document
* `warning = TRUE` : show any function warnings (set to `FALSE` to hide warnings in the rendered document)
* `message = FALSE` : do not show any function messages
* `cache = FALSE` : run all the code to create all of the images and objects each time you knit (set to `TRUE` if you have time\-consuming code)
### 10\.4\.2 YAML Header
The [YAML](https://psyteachr.github.io/glossary/y#yaml "A structured format for information") header is where you can set several options.
```
---
title: "My Demo Document"
author: "Me"
output:
html_document:
theme: spacelab
highlight: tango
toc: true
toc_float:
collapsed: false
smooth_scroll: false
toc_depth: 3
number_sections: false
---
```
The built\-in themes are: “cerulean,” “cosmo,” “flatly,” “journal,” “lumen,” “paper,” “readable,” “sandstone,” “simplex,” “spacelab,” “united,” and “yeti.” You can [view and download more themes](http://www.datadreaming.org/post/r-markdown-theme-gallery/).
Try changing the values from `false` to `true` to see what the options do.
### 10\.4\.3 TOC and Document Headers
If you include a table of contents (`toc`), it is created from your document headers. Headers in [markdown](https://psyteachr.github.io/glossary/m#markdown "A way to specify formatting, such as headers, paragraphs, lists, bolding, and links.") are created by prefacing the header title with one or more hashes (`#`). Add a typical paper structure to your document like the one below.
```
## Abstract
My abstract here...
## Introduction
What's the question; why is it interesting?
## Methods
### Participants
How many participants and why? Do your power calculation here.
### Procedure
What will they do?
### Analysis
Describe the analysis plan...
## Results
Demo results for simulated data...
## Discussion
What does it all mean?
## References
```
### 10\.4\.4 Code Chunks
You can include [code chunks](https://psyteachr.github.io/glossary/c#chunk "A section of code in an R Markdown file") that create and display images, tables, or computations to include in your text. Let’s start by simulating some data.
First, create a code chunk in your document. You can put this before the abstract, since we won’t be showing the code in this document. We’ll use a modified version of the `two_sample` function from the [GLM lecture](09_glm.html) to create two groups with a difference of 0\.75 and 100 observations per group.
This function was modified to add sex and effect\-code both sex and group. Using the `recode` function to set effect or difference coding makes it clearer which value corresponds to which level. There is no effect of sex or interaction with group in these simulated data.
```
two_sample <- function(diff = 0.5, n_per_group = 20) {
tibble(Y = c(rnorm(n_per_group, -.5 * diff, sd = 1),
rnorm(n_per_group, .5 * diff, sd = 1)),
grp = factor(rep(c("a", "b"), each = n_per_group)),
sex = factor(rep(c("female", "male"), times = n_per_group))
) %>%
mutate(
grp_e = recode(grp, "a" = -0.5, "b" = 0.5),
sex_e = recode(sex, "female" = -0.5, "male" = 0.5)
)
}
```
This function requires the `tibble` and `dplyr` packages, so remember to load the whole tidyverse package at the top of this script (e.g., in the setup chunk).
Now we can make a separate code chunk to create our simulated dataset `dat`.
```
dat <- two_sample(diff = 0.75, n_per_group = 100)
```
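To double-check the effect coding, count the coded columns in the console (this check does not need to go in the manuscript):
```
# confirm the effect coding: each level should map onto -0.5 or 0.5
dat %>% count(grp, grp_e, sex, sex_e)
```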
#### 10\.4\.4\.1 Tables
Next, create a code chunk where you want to display a table of the descriptives (e.g., Participants section of the Methods). We’ll use tidyverse functions you learned in the [data wrangling lectures](04_wrangling.html) to create summary statistics for each group.
```
```{r, results='asis'}
dat %>%
group_by(grp, sex) %>%
summarise(n = n(),
Mean = mean(Y),
SD = sd(Y)) %>%
rename(group = grp) %>%
mutate_if(is.numeric, round, 3) %>%
knitr::kable()
```
```
```
## `summarise()` has grouped output by 'grp'. You can override using the `.groups` argument.
```
```
## `mutate_if()` ignored the following grouping variables:
## Column `group`
```
| group | sex | n | Mean | SD |
| --- | --- | --- | --- | --- |
| a | female | 50 | \-0\.361 | 0\.796 |
| a | male | 50 | \-0\.284 | 1\.052 |
| b | female | 50 | 0\.335 | 1\.080 |
| b | male | 50 | 0\.313 | 0\.904 |
Notice that the r chunk specifies the option `results='asis'`. This lets you format the table using the `kable()` function from `knitr`. You can also use more specialised functions from [papaja](https://crsh.github.io/papaja_man/reporting.html#tables) or [kableExtra](https://haozhu233.github.io/kableExtra/awesome_table_in_html.html) to format your tables.
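As an example of the kableExtra approach, the `kable()` output can be piped into a styling function (a sketch; striped rows are just one of the available options):
```
# requires the kableExtra package
dat %>%
  group_by(grp, sex) %>%
  summarise(n = n(), Mean = mean(Y), SD = sd(Y)) %>%
  knitr::kable(digits = 3) %>%
  kableExtra::kable_styling(bootstrap_options = "striped", full_width = FALSE)
```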
#### 10\.4\.4\.2 Images
Next, create a code chunk where you want to display the image in your document. Let’s put it in the Results section. Use what you learned in the [data visualisation lecture](03_ggplot.html) to show violin\-boxplots for the two groups.
```
```{r, fig1, fig.cap="Figure 1. Scores by group and sex."}
ggplot(dat, aes(grp, Y, fill = sex)) +
geom_violin(alpha = 0.5) +
geom_boxplot(width = 0.25,
position = position_dodge(width = 0.9),
show.legend = FALSE) +
scale_fill_manual(values = c("orange", "purple")) +
xlab("Group") +
ylab("Score") +
theme(text = element_text(size = 30, family = "Times"))
```
```
The last line changes the default text size and font, which can be useful for generating figures that meet a journal’s requirements.
Figure 10\.1: Figure 1\. Scores by group and sex.
You can also include images that you did not create in R using the typical markdown syntax for images (the file name below is just a placeholder):
```
![All the Things by [Hyperbole and a Half](http://hyperboleandahalf.blogspot.com/)](images/all_the_things.png)
```
All the Things by [Hyperbole and a Half](http://hyperboleandahalf.blogspot.com/)
#### 10\.4\.4\.3 In\-line R
Now let’s use what you learned in the [GLM lecture](09_glm.html) to analyse our simulated data. The document is getting a little cluttered, so let’s move this code to external scripts.
* Create a new R script called “functions.R”
* Move the `library(tidyverse)` line and the `two_sample()` function definition to this file.
* Create a new R script called “analysis.R”
* Move the code for creating `dat` to this file.
* Add the following code to the end of the setup chunk:
```
source("functions.R")
source("analysis.R")
```
The `source` function lets you include code from an external file. This is really useful for making your documents readable. Just make sure you call your source files in the right order (e.g., include function definitions before you use the functions).
In the “analysis.R” file, we’re going to run the analysis code and save any numbers we might want to use in our manuscript to variables.
```
grp_lm <- lm(Y ~ grp_e * sex_e, data = dat)
stats <- grp_lm %>%
broom::tidy() %>%
mutate_if(is.numeric, round, 3)
```
The code above runs our analysis predicting `Y` from the effect\-coded group variable `grp_e`, the effect\-coded sex variable `sex_e`, and their interaction. The `tidy` function from the `broom` package turns the output into a tidy table. The `mutate_if` function uses `is.numeric` to check whether each column should be mutated and, if it is numeric, applies the `round` function with the digits argument set to 3\.
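In dplyr 1.0 and later, `mutate_if()` is superseded by `across()`; an equivalent version of the rounding step is sketched below:
```
# equivalent to mutate_if(is.numeric, round, 3) in dplyr >= 1.0
stats <- grp_lm %>%
  broom::tidy() %>%
  mutate(across(where(is.numeric), ~ round(.x, 3)))
```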
If you want to report the results of the analysis in a paragraph instead of a table, you need to know how to refer to each number in the table. Like with everything in R, there are many ways to do this. One is by specifying the column and row number like this:
```
stats$p.value[2]
```
```
## [1] 0
```
Another way is to create variables for each row like this:
```
grp_stats <- filter(stats, term == "grp_e")
sex_stats <- filter(stats, term == "sex_e")
ixn_stats <- filter(stats, term == "grp_e:sex_e")
```
Add the above code to the end of your analysis.R file. Then you can refer to columns by name like this:
```
grp_stats$p.value
sex_stats$statistic
ixn_stats$estimate
```
```
## [1] 0
## [1] 0.197
## [1] -0.099
```
You can insert these numbers into a paragraph with inline R code that looks like this:
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
p = `r grp_stats$p.value`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
p = `r sex_stats$p.value`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
p = `r ixn_stats$p.value`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \= 0\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
Remember, line breaks are ignored when you render the file (unless you add two spaces at the end of lines), so you can use line breaks to make it easier to read your text with inline R code.
The p\-values aren’t formatted in APA style. We wrote a function to deal with this in the [function lecture](07_functions.html). Add this function to the “functions.R” file and change the inline text to use the `report_p` function.
```
report_p <- function(p, digits = 3) {
if (!is.numeric(p)) stop("p must be a number")
if (p <= 0) warning("p-values are normally greater than 0")
if (p >= 1) warning("p-values are normally less than 1")
if (p < .001) {
reported = "p < .001"
} else {
roundp <- round(p, digits)
fmt <- paste0("p = %.", digits, "f")
reported = sprintf(fmt, roundp)
}
reported
}
```
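A quick console check shows what `report_p` returns (the values in the comments assume the definition above):
```
report_p(0.0002) # "p < .001"
report_p(0.044)  # "p = 0.044"
```
The updated inline text then looks like this: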
```
Scores were higher in group B than group A
(B = `r grp_stats$estimate`,
t = `r grp_stats$statistic`,
`r report_p(grp_stats$p.value, 3)`).
There was no significant difference between men and women
(B = `r sex_stats$estimate`,
t = `r sex_stats$statistic`,
`r report_p(sex_stats$p.value, 3)`)
and the effect of group was not qualified by an interaction with sex
(B = `r ixn_stats$estimate`,
t = `r ixn_stats$statistic`,
`r report_p(ixn_stats$p.value, 3)`).
```
**Rendered text:**
Scores were higher in group B than group A
(B \= 0\.647,
t \= 4\.74,
p \< .001\).
There was no significant difference between men and women
(B \= 0\.027,
t \= 0\.197,
p \= 0\.844\)
and the effect of group was not qualified by an interaction with sex
(B \= \-0\.099,
t \= \-0\.363,
p \= 0\.717\).
You might also want to report the statistics for the regression. There are a lot of numbers to format and insert, so it is easier to do this in the analysis script using `sprintf` for formatting.
```
s <- summary(grp_lm)
# calculate p value from fstatistic
fstat.p <- pf(s$fstatistic[1],
s$fstatistic[2],
s$fstatistic[3],
lower.tail = FALSE)
adj_r <- sprintf(
"The regression equation had an adjusted $R^{2}$ of %.3f ($F_{(%i, %i)}$ = %.3f, %s).",
round(s$adj.r.squared, 3),
s$fstatistic[2],
s$fstatistic[3],
round(s$fstatistic[1], 3),
report_p(fstat.p, 3)
)
```
Then you can just insert the text in your manuscript with inline R like this: \`r adj\_r\`:
The regression equation had an adjusted \\(R^{2}\\) of 0\.090 (\\(F\_{(3, 196\)}\\) \= 7\.546, p \< .001\).
### 10\.4\.5 Bibliography
There are several ways to do in\-text citations and automatically generate a [bibliography](https://rmarkdown.rstudio.com/authoring_bibliographies_and_citations.html#bibliographies) in RMarkdown.
#### 10\.4\.5\.1 Create a BibTeX File Manually
You can just make a BibTeX file and add citations manually. Make a new Text File in RStudio called “bibliography.bib.”
Next, add the line `bibliography: bibliography.bib` to your YAML header.
You can add citations in the following format:
```
@article{shortname,
author = {Author One and Author Two and Author Three},
title = {Paper Title},
journal = {Journal Title},
volume = {vol},
number = {issue},
pages = {startpage--endpage},
year = {year},
doi = {doi}
}
```
#### 10\.4\.5\.2 Citing R packages
You can get the citation for an R package using the function `citation`. You can paste the bibtex entry into your bibliography.bib file. Make sure to add a short name (e.g., “rmarkdown”) before the first comma to refer to the reference.
```
citation(package="rmarkdown")
```
```
##
## To cite the 'rmarkdown' package in publications, please use:
##
## JJ Allaire and Yihui Xie and Jonathan McPherson and Javier Luraschi
## and Kevin Ushey and Aron Atkins and Hadley Wickham and Joe Cheng and
## Winston Chang and Richard Iannone (2021). rmarkdown: Dynamic
## Documents for R. R package version 2.9.4. URL
## https://rmarkdown.rstudio.com.
##
## Yihui Xie and J.J. Allaire and Garrett Grolemund (2018). R Markdown:
## The Definitive Guide. Chapman and Hall/CRC. ISBN 9781138359338. URL
## https://bookdown.org/yihui/rmarkdown.
##
## Yihui Xie and Christophe Dervieux and Emily Riederer (2020). R
## Markdown Cookbook. Chapman and Hall/CRC. ISBN 9780367563837. URL
## https://bookdown.org/yihui/rmarkdown-cookbook.
##
## To see these entries in BibTeX format, use 'print(<citation>,
## bibtex=TRUE)', 'toBibtex(.)', or set
## 'options(citation.bibtex.max=999)'.
```
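As the note at the end of that output suggests, you can also print the entry directly in BibTeX format and copy it into bibliography.bib:
```
# print the citation as a BibTeX entry for pasting into bibliography.bib
toBibtex(citation(package = "rmarkdown"))
```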
#### 10\.4\.5\.3 Download Citation Info
You can get the BibTeX formatted citation from most publisher websites. For example, go to the publisher’s page for [Equivalence Testing for Psychological Research: A Tutorial](https://journals.sagepub.com/doi/abs/10.1177/2515245918770963), click on the Cite button (in the sidebar or under the bottom Explore More menu), choose BibTeX format, and download the citation. You can open up the file in a text editor and copy the text. It should look like this:
```
@article{doi:10.1177/2515245918770963,
author = {Daniël Lakens and Anne M. Scheel and Peder M. Isager},
title ={Equivalence Testing for Psychological Research: A Tutorial},
journal = {Advances in Methods and Practices in Psychological Science},
volume = {1},
number = {2},
pages = {259-269},
year = {2018},
doi = {10.1177/2515245918770963},
URL = {
https://doi.org/10.1177/2515245918770963
},
eprint = {
https://doi.org/10.1177/2515245918770963
}
,
abstract = { Psychologists must be able to test both for the presence of an effect and for the absence of an effect. In addition to testing against zero, researchers can use the two one-sided tests (TOST) procedure to test for equivalence and reject the presence of a smallest effect size of interest (SESOI). The TOST procedure can be used to determine if an observed effect is surprisingly small, given that a true effect at least as extreme as the SESOI exists. We explain a range of approaches to determine the SESOI in psychological science and provide detailed examples of how equivalence tests should be performed and reported. Equivalence tests are an important extension of the statistical tools psychologists currently use and enable researchers to falsify predictions about the presence, and declare the absence, of meaningful effects. }
}
```
Paste the reference into your bibliography.bib file. Change `doi:10.1177/2515245918770963` in the first line of the reference to a short string you will use to cite the reference in your manuscript. We’ll use `TOSTtutorial`.
#### 10\.4\.5\.4 Converting from reference software
Most reference managers, such as EndNote, Zotero, or Mendeley, can export references in BibTeX format. You just need to check the short names (citation keys) in the resulting file.
#### 10\.4\.5\.5 In\-text citations
You can cite references in text like this:
```
This tutorial uses several R packages [@tidyverse;@rmarkdown].
```
This tutorial uses several R packages ([Wickham 2017](#ref-tidyverse); [Allaire et al. 2018](#ref-rmarkdown)).
Put a minus in front of the @ if you just want the year:
```
Lakens, Scheel and Isager [-@TOSTtutorial] wrote a tutorial explaining how to test for the absence of an effect.
```
Lakens, Scheel and Isager ([2018](#ref-TOSTtutorial)) wrote a tutorial explaining how to test for the absence of an effect.
#### 10\.4\.5\.6 Citation Styles
You can search a [list of style files](https://www.zotero.org/styles) for various journals and download a file that will format your bibliography for a specific journal’s style. You’ll need to add the line `csl: filename.csl` to your YAML header.
Add some citations to your bibliography.bib file, reference them in your text, and render your manuscript to see the automatically generated reference section. Try a few different citation style files.
### 10\.4\.6 Output Formats
You can knit your file to PDF or Word if you have the right packages installed on your computer.
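For example, assuming the `rmarkdown` package is installed (and a LaTeX distribution for PDF output), you can render the same file to other formats from the console; the file name here is just a placeholder:
```
# render the manuscript to Word and PDF versions
rmarkdown::render("my_manuscript.Rmd", output_format = "word_document")
rmarkdown::render("my_manuscript.Rmd", output_format = "pdf_document")
```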
### 10\.4\.7 Computational Reproducibility
Computational reproducibility refers to making all aspects of your analysis reproducible, including specifics of the software you used to run the code you wrote. R packages get updated periodically and some of these updates may break your code. Using a computational reproducibility platform guards against this by always running your code in the same environment.
[Code Ocean](https://codeocean.com/) is a new platform that lets you run your code in the cloud via a web browser.
10\.5 Glossary
--------------
| term | definition |
| --- | --- |
| [chunk](https://psyteachr.github.io/glossary/c#chunk) | A section of code in an R Markdown file |
| [markdown](https://psyteachr.github.io/glossary/m#markdown) | A way to specify formatting, such as headers, paragraphs, lists, bolding, and links. |
| [reproducibility](https://psyteachr.github.io/glossary/r#reproducibility) | The extent to which the findings of a study can be repeated in some other context |
| [yaml](https://psyteachr.github.io/glossary/y#yaml) | A structured format for information |
10\.6 References
----------------
| Field Specific |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-02-03-summary-statistics-1D.html |
5\.2 Central tendency and dispersion
------------------------------------
This section will look at two types of summary statistics: measures of central tendency and measures of dispersion.
**Measures of central tendency** map a vector of observations onto a single number that represents, roughly put, “the center”. Since what counts as a “center” is ambiguous, there are several measures of central tendency, and different measures can be more or less adequate for one purpose or another. The type of variable (nominal, ordinal or metric, for instance) will also influence the choice of measure. We will visit three prominent measures of central tendency here: *(arithmetic) mean*, *median* and *mode*.
**Measures of dispersion** indicate how much the observations are spread out around, let’s say, “a center”. We will visit three prominent measures of dispersion: the *variance*, the *standard deviation* and *quantiles*.
To illustrate these ideas, consider the case of a numeric vector of observations. Central tendency and dispersion together describe a (numeric) vector by giving indicative information about the point around which the observations spread, and how far away from that middle point they tend to lie. Fictitious examples of observation vectors with higher or lower central tendency and higher or lower dispersion are given in Figure [5\.2](Chap-02-03-summary-statistics-1D.html#fig:ch-02-03-dispersion-central-tendency).
Figure 5\.2: Fictitious data points with higher/lower central tendencies and higher/lower dispersion. NB: The points are ‘jittered’ along the vertical dimension for better visibility; only the horizontal dimension is relevant here.
### 5\.2\.1 The data for the remainder of the chapter
In the remainder of this chapter, we will use the [avocado data set](app-93-data-sets-avocado.html#app-93-data-sets-avocado), a very simple and theory\-free example in which we can explore two metric variables: the average price at which avocados were sold during specific intervals of time and the total amount of avocados sold.
We load the (pre\-processed) data into a variable named `avocado_data` (see Appendix [D.5](app-93-data-sets-avocado.html#app-93-data-sets-avocado) for more information on this data set):
```
avocado_data <- aida::data_avocado
```
We can then take a glimpse:
```
glimpse(avocado_data)
```
```
## Rows: 18,249
## Columns: 7
## $ Date <date> 2015-12-27, 2015-12-20, 2015-12-13, 2015-12-06, 201…
## $ average_price <dbl> 1.33, 1.35, 0.93, 1.08, 1.28, 1.26, 0.99, 0.98, 1.02…
## $ total_volume_sold <dbl> 64236.62, 54876.98, 118220.22, 78992.15, 51039.60, 5…
## $ small <dbl> 1036.74, 674.28, 794.70, 1132.00, 941.48, 1184.27, 1…
## $ medium <dbl> 54454.85, 44638.81, 109149.67, 71976.41, 43838.39, 4…
## $ large <dbl> 48.16, 58.33, 130.50, 72.58, 75.78, 43.61, 93.26, 80…
## $ type <chr> "conventional", "conventional", "conventional", "con…
```
The columns that will interest us the most in this chapter are:
* `average_price` \- average price of a single avocado
* `total_volume_sold` \- total number of avocados sold
* `type` \- whether the price/amount is for a conventional or an organic avocado
In particular, we will look at summary statistics for `average_price` and `total_volume_sold`, either for the whole data set or independently for each type of avocado. Notice that both of these variables are numeric. They are vectors of numbers, each representing an observation.
### 5\.2\.2 Measures of central tendency
#### 5\.2\.2\.1 The (arithmetic) mean
If \\(\\vec{x} \= \\langle x\_1, \\dots , x\_n \\rangle\\) is a vector of \\(n\\) observations with \\(x\_i \\in \\mathbb{R}\\) for all \\(1 \\le i \\le n\\), the (arithmetic) **mean** of \\(x\\), written \\(\\mu\_{\\vec{x}}\\), is defined as
\\\[\\mu\_{\\vec{x}} \= \\frac{1}{n}\\sum\_{i\=1}^n x\_i\\,.\\]
The arithmetic mean can be understood intuitively as **the center of gravity**. If we place a marble on a wooden board for every \\(x\_i\\), such that every marble is equally heavy and the distances between the marbles match the differences between the corresponding data measurements, the arithmetic mean is where you can balance the board with the tip of your finger.
**Example.** The mean of the vector \\(\\vec{x} \= \\langle 0, 3, 6, 7\\rangle\\) is \\(\\mu\_{\\vec{x}} \= \\frac{0 \+ 3 \+ 6 \+ 7}{4} \= \\frac{16}{4} \= 4\\,.\\) The black dots in the graph below show the data observations, and the red cross indicates the mean. Notice that the mean is clearly *not* the mid\-point between the maximum and the minimum (which here would be 3\.5\).
To calculate the mean of a large vector, R has a built\-in function `mean`, which we have in fact used frequently before. Let’s use it to calculate the mean of the variable `average_price` for different types of avocados:
```
avocado_data %>%
group_by(type) %>%
summarise(
mean_price = mean(average_price)
)
```
```
## # A tibble: 2 × 2
## type mean_price
## <chr> <dbl>
## 1 conventional 1.16
## 2 organic 1.65
```
Unsurprisingly, the overall mean of the observed prices is (numerically) higher for organic avocados than for conventional ones.
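As a quick sanity check, we can also reproduce the small worked example from above with the same function (a trivial sketch, just to connect the formula to the code):
```
mean(c(0, 3, 6, 7))
```
```
## [1] 4
```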
**Excursion.** It is also possible to conceptualize the arithmetic mean as the **expected value** when sampling from the observed data. This is useful for linking the mean of a data sample to the expected value of a random variable, a concept we will introduce in Chapter [7](Chap-03-01-probability.html#Chap-03-01-probability). Suppose you have gathered the data \\(\\vec{x} \= \\langle 0, 3, 6, 7\\rangle\\). What is the expected value that you think you will obtain if you sample from this data vector once? – Wait! What does that even mean? Expected value? Sampling once?
Suppose that some joker from around town invites you for a game. The game goes like this: The joker puts a ball in an urn, one for each data observation. The joker writes the observed value on the ball corresponding to that value. You pay the joker a certain amount of money to be allowed to draw one ball from the urn. The balls are indistinguishable and the process of drawing is entirely fair. You receive the number corresponding to the ball you drew paid out in silver coins. (For simplicity, we assume that all numbers are non\-negative, but that is not crucial. If a negative number is drawn, you just have to pay the joker that amount.)
How many silver coins would you maximally pay to play one round? Well, of course, no more than four (unless you value gaming on top of silver)! This is because 4 is the expected value of drawing once. This, in turn, is because every ball has a chance of 0\.25 of being drawn. So you can expect to earn 0 silver with a 25% chance, 3 with a 25% chance, 6 with a 25% chance and 7 with a 25% chance. In this sense, the mean is the expected value of sampling once from the observed data.
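To make this concrete, here is a small simulation sketch (not from the book): if we repeatedly draw a single ball, i.e., sample a single value from the data vector, the average payout approaches the mean of 4.
```
# draw one value at a time, many times, and average the payouts;
# the result will be close to the mean of 4 (it varies slightly between runs)
payouts <- sample(c(0, 3, 6, 7), size = 100000, replace = TRUE)
mean(payouts)
```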
#### 5\.2\.2\.2 The median
If \\(\\vec{x} \= \\langle x\_1, \\dots , x\_n \\rangle\\) is a vector of \\(n\\) data observations from an at least ordinal measure and if \\(\\vec{x}\\) is ordered such that for all \\(1 \\le i \< n\\) we have \\(x\_i \\le x\_{i\+1}\\), the **median** is the value \\(x\_i\\) such that the number of data observations that are greater than or equal to \\(x\_i\\) and the number of data observations that are less than or equal to \\(x\_i\\) are equal. Notice that this definition may yield no unique median. In that case, different alternative strategies are used, depending on the data type at hand (ordinal or metric). (See also the example below.) The median corresponds to the 50% quantile, a concept introduced below.
**Example.** The median of the vector \\(\\vec{x} \= \\langle 0, 3, 6, 7 \\rangle\\) does not exist by the definition given above. However, for metric measures, where distances between measurements are meaningful, it is customary to take the two values “closest to where the median should be” and average them. In the example at hand, this would be \\(\\frac{3 \+ 6}{2} \= 4\.5\\). The plot below shows the data points in black, the mean as a red cross (as before) and the median as a blue circle.
The function `median` from base R computes the median of a vector. It also takes an ordered factor as an argument.
```
median(c(0, 3, 6, 7 ))
```
```
## [1] 4.5
```
To please the avocados, let’s also calculate the median price of both types of avocados and compare these to the means we calculated earlier:
```
avocado_data %>%
group_by(type) %>%
summarise(
mean_price = mean(average_price),
median_price = median(average_price)
)
```
```
## # A tibble: 2 × 3
## type mean_price median_price
## <chr> <dbl> <dbl>
## 1 conventional 1.16 1.13
## 2 organic 1.65 1.63
```
#### 5\.2\.2\.3 The mode
The **mode** is the unique value that occurred most frequently in the data. If there is no unique value with that property, there is no mode. While the mean is only applicable to metric variables, and the median only to variables that are at least ordinal, the mode is only reasonable for variables that have a finite set of different possible observations (nominal or ordinal).
There is no built\-in function in R to return the mode of a (suitable) vector, but it is easily retrieved by obtaining counts.
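For illustration, here is one way to do this; the helper function `get_mode` below is our own sketch, not a function from base R or the `aida` package. It returns all values that are tied for the highest count (as character strings, because `table` stores counts under character names), so a unique mode exists only if the result has length one.
```
get_mode <- function(x) {
  counts <- table(x)                    # tabulate how often each value occurs
  names(counts)[counts == max(counts)]  # keep the value(s) with maximal count
}
get_mode(c("a", "b", "b", "c"))
```
```
## [1] "b"
```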
**Exercise 5\.1: Mean, median, mode**
1. Compute the mean, median and mode of data vector \\(\\vec{x} \= \\langle1,2,4,10\\rangle\\).
Solution
Mean: \\(\\frac{1\+2\+4\+10}{4}\=\\frac{17}{4}\=4\.25\\)
Median: \\(\\frac{2\+4}{2}\=3\\)
Mode: all values are equally frequent, so there is no (unique) mode.
2. Now add two numbers to the vector such that the median stays the same, but mode and mean change.
Solution
Numbers to add: \\(1, 10 \\to \\vec{x} \= \\langle1,1,2,4,10,10\\rangle\\)
New mean: \\(\\frac{1\+1\+2\+4\+10\+10}{6}\=\\frac{28}{6}\\approx4\.67\\)
New mode: Both \\(1\\) and \\(10\\) are equally frequent and more frequent than all other numbers. Consequently, there is no (unique) mode.
3. Decide for the following statements whether they are true or false:
1. The mean is a measure of central tendency, which can be quite sensitive to even single outliers in the data.
2. If \\(\\vec{x}\\) is a vector of binary Boolean outcomes, we can retrieve the proportion of occurrences of TRUE in \\(\\vec{x}\\) by the R function `mean(x)`.
Solution
Both statements are correct.
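As a quick illustration of statement b., consider this sketch: when a logical vector is passed to `mean`, `TRUE` is treated as 1 and `FALSE` as 0, so the result is the proportion of `TRUE` values.
```
outcomes <- c(TRUE, FALSE, TRUE, TRUE)
mean(outcomes)  # proportion of TRUE values
```
```
## [1] 0.75
```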
### 5\.2\.3 Measures of dispersion
Measures of dispersion indicate how much the observed data is spread out around a measure of central tendency. Intuitively put, they provide a measure for how diverse, variable, clustered, concentrated or smeared out the data observations are. In the following, we will cover three common notions: variance, standard deviation and quantiles.
#### 5\.2\.3\.1 Variance
The variance is a common and very useful measure of dispersion for metric data. The variance \\(\\text{Var}(\\vec{x})\\) of a vector of metric observations \\(\\vec{x}\\) of length \\(n\\) is defined as the average of the squared distances from the mean:
\\\[\\text{Var}(\\vec{x}) \= \\frac{1}{n} \\sum\_{i\=1}^n (x\_i \- \\mu\_{\\vec{x}})^2\\]
**Example.** The variance of the vector \\(\\vec{x} \= \\langle 0, 3, 6, 7 \\rangle\\) is computed as:
\\\[\\text{Var}(\\vec{x}) \= \\frac{1}{4} \\ \\left ( (0\-4\)^2 \+ (3\-4\)^2 \+ (6\-4\)^2 \+ (7\-4\)^2 \\right ) \= \\]
\\\[ \\frac{1}{4} \\ (16 \+ 1 \+ 4 \+ 9\) \= \\frac{30}{4} \= 7\.5\\]
Figure [5\.3](Chap-02-03-summary-statistics-1D.html#fig:ch-02-03-variance-rectangles) shows a geometric interpretation of the variance for the running example of vector \\(\\vec{x} \= \\langle 0, 3, 6, 7 \\rangle\\).
Figure 5\.3: Geometrical interpretation of variance. Four data points (orange dots) and their mean (red cross) are shown, together with the squares whose sides are the differences between the observed data points and the mean. The numbers in white give the area of each square, which is also indicated by the coloring of each rectangle.
We can calculate the variance in R explicitly:
```
x <- c(0, 3, 6, 7)
sum((x - mean(x))^2) / length(x)
```
```
## [1] 7.5
```
There is also a built\-in function `var` from base R. Using this we get a different result though:
```
x <- c(0, 3, 6, 7)
var(x)
```
```
## [1] 10
```
This is because `var` computes the variance by a slightly different formula in order to obtain an **unbiased estimator** of the variance for the case that the mean is not known but is also estimated from the data. The formula for the unbiased estimator that R uses simply replaces the \\(n\\) in the denominator by \\(n\-1\\):[21](#fn21)
\\\[\\text{Var}(\\vec{x}) \= \\frac{1}{n\-1} \\sum\_{i\=1}^n (x\_i \- \\mu\_{\\vec{x}})^2\\]
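We can verify for the running example that this \\(n\-1\\) version of the formula indeed reproduces the output of `var` (a small sketch):
```
x <- c(0, 3, 6, 7)
sum((x - mean(x))^2) / (length(x) - 1)  # unbiased estimator, computed by hand
```
```
## [1] 10
```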
#### 5\.2\.3\.2 Standard deviation
The standard deviation \\(\\text{SD}(\\vec{x})\\) of numeric vector \\(\\vec{x}\\) is just the square root of the variance:
\\\[ \\text{SD}(\\vec{x}) \= \\sqrt{\\text{Var}(\\vec{x})} \= \\sqrt{\\frac{1}{n} \\sum\_{i\=1}^n (x\_i \- \\mu\_{\\vec{x}})^2}\\]
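Note that base R’s `sd` is simply the square root of `var`, so it also uses the \\(n\-1\\) denominator. For the running example (a quick sketch):
```
x <- c(0, 3, 6, 7)
sd(x)         # same as sqrt(var(x))
sqrt(var(x))
```
```
## [1] 3.162278
## [1] 3.162278
```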
Let’s calculate the (unbiased) variance and standard deviation for the `average_price` of different types of avocados:
```
avocado_data %>%
group_by(type) %>%
summarize(
variance_price = var(average_price),
stddev_price = sd(average_price)
)
```
```
## # A tibble: 2 × 3
## type variance_price stddev_price
## <chr> <dbl> <dbl>
## 1 conventional 0.0692 0.263
## 2 organic 0.132 0.364
```
#### 5\.2\.3\.3 Quantiles
For a vector \\(\\vec{x}\\) of at least ordinal measures, we can generalize the concept of a median to an arbitrary quantile. A \\(k\\)% quantile is the element \\(x\_i\\) in \\(\\vec{x}\\), such that \\(k\\)% of the data in \\(\\vec{x}\\) lies below \\(x\_i\\). If this definition does not yield a unique element for some \\(k\\)% threshold, similar methods to what we saw for the median are applied.
We can use the base R function `quantile` to obtain the 10%, 25%, 50% and 85% quantiles (just arbitrary picks) for the `average_price` in the avocado data set:
```
quantile(
# vector of observations
x = avocado_data$average_price,
# which quantiles
probs = c(0.1, 0.25, 0.5, 0.85)
)
```
```
## 10% 25% 50% 85%
## 0.93 1.10 1.37 1.83
```
This tells us, for instance, that only about ten percent of the data observations had prices lower than $0\.93\.
**Exercise 5\.2: Variance, standard deviation, quantiles**
1. Compute the unbiased variance and standard deviation of data vector \\(\\vec{y} \= \\langle4,2,6,8\\rangle\\).
Solution
\\\[
\\begin{align}
\\mu\_{\\vec{y}} \&\= \\frac{4\+2\+6\+8}{4}\=5 \\\\
Var(\\vec{y}) \&\= \\frac{1}{n\-1}\\sum\_{i \= 1}^n (y\_i\-\\mu\_{\\vec{y}})^2 \\\\
\&\= \\frac{1}{4\-1}((4\-5\)^2\+(2\-5\)^2\+(6\-5\)^2\+(8\-5\)^2\) \\\\
\&\= \\frac{1}{3}(1\+9\+1\+9\) \= \\frac{20}{3} \\approx 6\.67 \\\\
SD(\\vec{y}) \&\= \\sqrt{Var(\\vec{y})} \= \\sqrt{6\.67} \\approx 2\.58
\\end{align}
\\]
2. Decide for the following statements whether they are true or false:
1. The median is the 50% quantile.
2. A 10% quantile of 0\.2 indicates that 10% of the data observations are above 0\.2\.
3. The 85% quantile of a vector with unequal numbers always has a larger value than the 25% quantile.
Solution
Statements a. and c. are correct.
### 5\.2\.4 Excursion: Quantifying confidence with bootstrapping
[Bootstrapping](https://en.wikipedia.org/wiki/Bootstrapping_(statistics)) is an elegant way to obtain measures of confidence for summary statistics. These measures of confidence can be used for parameter inference, too. We will discuss parameter inference at length in Chapter [9](ch-03-04-parameter-estimation.html#ch-03-04-parameter-estimation). In this course, we will not use bootstrapping as an alternative approach to parameter inference. We will, however, follow a common practice (at least in some areas of Cognitive Psychology) to use **bootstrapped 95% confidence intervals of the mean** as part of descriptive statistics, i.e., in summaries and plots of the data.
The bootstrap is a method from a more general class of algorithms, namely so\-called **resampling methods**. The general idea is, roughly put, that we treat the data at hand as the true representation of reality. We then imagine that we run an experiment on that (restricted, hypothetical) reality. We then ask ourselves: What would we estimate (e.g., as a mean) in any such hypothetical experiment? The more these hypothetical measures derived from hypothetical experiments based on a hypothetical reality differ, the less confident we are in the estimate. Sounds weird, but it’s mind\-blowingly elegant.
An algorithm for constructing a 95% confidence interval of the mean of vector \\(D\\) of numeric data with length \\(k\\) looks as follows:
1. take \\(k\\) samples from \\(D\\) with replacement, call this \\(D^{\\textrm{rep}}\\)[22](#fn22)
2. calculate the mean \\(\\mu(D^{\\textrm{rep}})\\) of the newly sampled data
3. repeat steps 1 and 2 to gather \\(r\\) means of different resamples of \\(D\\); call the result vector \\(\\mu\_{\\textrm{sampled}}\\)
4. the boundaries of the 95% inner quantile of \\(\\mu\_{\\textrm{sampled}}\\) are the bootstrapped 95% confidence interval of the mean
The higher \\(r\\), i.e., the more samples we take, the better the estimate. The higher \\(k\\), i.e., the more observations we have to begin with, the less variable the means \\(\\mu(D^{\\textrm{rep}})\\) of the resampled data will usually be. Hence, usually, the higher \\(k\\), the smaller the bootstrapped 95% confidence interval of the mean.
Here is a convenience function that we will use throughout the book to produce bootstrapped 95% confidence intervals of the mean (the function is also supplied as part of the `aida` package):
```
## takes a vector of numbers and returns bootstrapped 95% ConfInt
## of the mean, based on `n_resamples` re-samples (default: 1000)
bootstrapped_CI <- function(data_vector, n_resamples = 1000) {
  resampled_means <- map_dbl(seq(n_resamples), function(i) {
    mean(sample(x = data_vector, size = length(data_vector), replace = TRUE))
  })
  tibble(
    'lower' = quantile(resampled_means, 0.025),
    'mean'  = mean(data_vector),
    'upper' = quantile(resampled_means, 0.975)
  )
}
```
Applying this method to the vector of average avocado prices, we get:
```
bootstrapped_CI(avocado_data$average_price)
```
```
## # A tibble: 1 × 3
## lower mean upper
## <dbl> <dbl> <dbl>
## 1 1.40 1.41 1.41
```
Notice that, since `average_price` has length 18249, i.e., we have \\(k \= 18249\\) observations in the data, the bootstrapped 95% confidence interval is rather narrow. Compare this against a case of \\(k \= 300\\), obtained by only looking at the first 300 entries in `average_price`:
```
# first 300 observations of `average price` only
smaller_data <- avocado_data$average_price[1:300]
bootstrapped_CI(smaller_data)
```
```
## # A tibble: 1 × 3
## lower mean upper
## <dbl> <dbl> <dbl>
## 1 1.14 1.16 1.17
```
The mean is different (because we are looking at earlier time points) but, importantly, the interval is larger because with only 300 observations, we have less confidence in the estimate.
**Exercise 5\.3: Bootstrapped Confidence Intervals**
1. Explain in your own words how the bootstrapping\-algorithm works for obtaining 95% confidence intervals of the mean (2\-3 sentences).
Solution
To get the 95% CI of the mean, we repeatedly take resamples from the data vector (with replacement, each of the same size as the original data) and calculate the mean of each resample. After collecting \\(r\\) such means, we get a vector of means \\(\\mu\_{sampled}\\). The 95% CI ranges between the boundaries of the 95% inner quantile of \\(\\mu\_{sampled}\\).
2. Decide for the following statements whether they are true or false:
1. The more samples we take from our data, the larger the 95% confidence interval gets.
2. A larger 95% confidence interval of the mean indicates higher uncertainty regarding the mean.
3. The 95% confidence interval of the mean contains 95% of the values of \\(\\mu\_{sampled}\\).
Solution
Statements b. and c. are correct.
#### 5\.2\.4\.1 Summary functions with multiple outputs, using nested tibbles
To obtain summary statistics for different groups of a variable, we can use the function `bootstrapped_CI` conveniently in concert with **nested tibbles**, as demonstrated here:
```
avocado_data %>%
group_by(type) %>%
# nest all columns except grouping-column 'type' in a tibble
# the name of the new column is 'price_tibbles'
nest(.key = "price_tibbles") %>%
# collect the summary statistics for each nested tibble
# the outcome is a new column with nested tibbles
summarise(
CIs = map(price_tibbles, function(d) bootstrapped_CI(d$average_price))
) %>%
# unnest the newly created nested tibble
unnest(CIs)
```
```
## # A tibble: 2 × 4
## type lower mean upper
## <chr> <dbl> <dbl> <dbl>
## 1 conventional 1.15 1.16 1.16
## 2 organic 1.65 1.65 1.66
```
Using nesting in this case is helpful because we only want to run the bootstrap function once per group, but we need all three numbers it returns (lower bound, mean and upper bound). The following explains nesting based on this example.
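For comparison, here is a sketch (not from the book) of the less convenient alternative without nesting: since `summarise` expects a single value per column, we would have to call `bootstrapped_CI` once per output column, i.e., three times per group, tripling the amount of resampling:
```
avocado_data %>%
  group_by(type) %>%
  summarise(
    lower = bootstrapped_CI(average_price)$lower,
    mean  = bootstrapped_CI(average_price)$mean,
    upper = bootstrapped_CI(average_price)$upper
  )
```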
To understand what is going on with nested tibbles, notice that the `nest` function in this example creates a tibble with just two rows, one for each value of the variable `type`; each row contains a nested tibble holding all the data for that group. The column `price_tibbles` in the first row therefore contains the complete data for all observations of conventional avocados:
```
avocado_data %>%
group_by(type) %>%
# nest all columns except grouping-column 'type' in a tibble
# the name of the new column is 'price_tibbles'
nest(.key = "price_tibbles") %>%
# extract new column with tibble
pull(price_tibbles) %>%
# peek at the first entry in this vector
.[1] %>% head()
```
```
## [[1]]
## # A tibble: 9,126 × 6
## Date average_price total_volume_sold small medium large
## <date> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 2015-12-27 1.33 64237. 1037. 54455. 48.2
## 2 2015-12-20 1.35 54877. 674. 44639. 58.3
## 3 2015-12-13 0.93 118220. 795. 109150. 130.
## 4 2015-12-06 1.08 78992. 1132 71976. 72.6
## 5 2015-11-29 1.28 51040. 941. 43838. 75.8
## 6 2015-11-22 1.26 55980. 1184. 48068. 43.6
## 7 2015-11-15 0.99 83454. 1369. 73673. 93.3
## 8 2015-11-08 0.98 109428. 704. 101815. 80
## 9 2015-11-01 1.02 99811. 1022. 87316. 85.3
## 10 2015-10-25 1.07 74339. 842. 64757. 113
## # … with 9,116 more rows
```
After nesting, we call the custom function `bootstrapped_CI` on the variable `average_price` inside of every nested tibble, so first for the conventional, then for the organic avocados. The result is a nested tibble. If we now look inside the new column `CIs`, we see that its cells contain tibbles with the output of each call of `bootstrapped_CI`:
```
avocado_data %>%
group_by(type) %>%
# nest all columns except grouping-column 'type' in a tibble
# the name of the new column is 'price_tibbles'
nest(.key = "price_tibbles") %>%
# collect the summary statistics for each nested tibble
# the outcome is a new column with nested tibbles
summarise(
CIs = map(price_tibbles, function(d) bootstrapped_CI(d$average_price))
) %>%
# extract new column vector with nested tibbles
pull(CIs) %>%
# peek at the first entry
.[1] %>% head()
```
```
## [[1]]
## # A tibble: 1 × 3
## lower mean upper
## <dbl> <dbl> <dbl>
## 1 1.15 1.16 1.16
```
Finally, we unnest the new column `CIs` to obtain the final result (code repeated from above):
```
avocado_data %>%
group_by(type) %>%
# nest all columns except grouping-column 'type' in a tibble
# the name of the new column is 'price_tibbles'
nest(.key = "price_tibbles") %>%
# collect the summary statistics for each nested tibble
# the outcome is a new column with nested tibbles
summarise(
CIs = map(price_tibbles, function(d) bootstrapped_CI(d$average_price))
) %>%
# unnest the newly created nested tibble
unnest(CIs)
```
```
## # A tibble: 2 × 4
## type lower mean upper
## <chr> <dbl> <dbl> <dbl>
## 1 conventional 1.15 1.16 1.16
## 2 organic 1.65 1.65 1.66
```
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-02-03-summary-statistics-2D.html |
5\.3 Covariance and correlation
-------------------------------
### 5\.3\.1 Covariance
Let \\(\\vec{x}\\) and \\(\\vec{y}\\) be two vectors of numeric data of the same length, such that all pairs of \\(x\_i\\) and \\(y\_i\\) are associated observations. For example, the vectors `avocado_data$total_volume_sold` and `avocado_data$average_price` would be such vectors. The covariance between \\(\\vec{x}\\) and \\(\\vec{y}\\) measures, intuitively put, the degree to which changes in one vector correspond with changes in the other. Formally, covariance is defined as follows (notice that we use \\(n\-1\\) in the denominator to obtain an unbiased estimator if the means are unknown):
\\\[\\text{Cov}(\\vec{x},\\vec{y}) \= \\frac{1}{n\-1} \\ \\sum\_{i\=1}^n (x\_i \- \\mu\_{\\vec{x}}) \\ (y\_i \- \\mu\_{\\vec{y}})\\]
There is a visually intuitive geometric interpretation of covariance. To see this, let’s look at a small contrived example.
```
contrived_example <-
tribble(
~x, ~y,
2, 2,
2.5, 4,
3.5, 2.5,
4, 3.5
)
```
First, notice that the mean of `x` and `y` is 3:
```
# NB: `map_df` here iterates over the columns of the tibble in its
# first argument slot
means_contr_example <- map_df(contrived_example, mean)
means_contr_example
```
```
## # A tibble: 1 × 2
## x y
## <dbl> <dbl>
## 1 3 3
```
We can then compute the covariance as follows:
```
contrived_example <- contrived_example %>%
mutate(
area_rectangle = (x - mean(x)) * (y - mean(y)),
covariance = 1 / (n() - 1) * sum((x - mean(x)) * (y - mean(y)))
)
contrived_example
```
```
## # A tibble: 4 × 4
## x y area_rectangle covariance
## <dbl> <dbl> <dbl> <dbl>
## 1 2 2 1 0.25
## 2 2.5 4 -0.5 0.25
## 3 3.5 2.5 -0.25 0.25
## 4 4 3.5 0.5 0.25
```
Similar to what we did with the variance, we can give a geometrical interpretation of covariance. Figure [5\.4](Chap-02-03-summary-statistics-2D.html#fig:chap-02-03-covariance) shows the four summands contributing to the covariance of the `contrived_example`. What this graph clearly shows is that summands can have different signs. If \\(x\_i\\) and \\(y\_i\\) are both bigger than the mean, or if both are smaller than the mean, then the corresponding summand is positive. Otherwise, the corresponding summand is negative. This means that the covariance captures the degree to which pairs of \\(x\_i\\) and \\(y\_i\\) tend to deviate from the mean in the same general direction. A positive covariance is indicative of a positive general association between \\(\\vec{x}\\) and \\(\\vec{y}\\), while a negative covariance suggests that as you increase \\(x\_i\\), the associated \\(y\_i\\) becomes smaller.
Figure 5\.4: Geometrical interpretation of covariance. Four data points (orange dots) and their mean (white dot) are shown, together with the rectangles whose sides are the horizontal and vertical differences between the observed data points and the mean. The numbers in white give the (signed) area of each rectangle, which is also indicated by the coloring of each rectangle.
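For the curious, a plot in the spirit of Figure 5\.4 could be sketched with `ggplot` roughly as follows; the colors and styling here are ad hoc choices, not the code behind the book's actual figure:
```
# a rough sketch of a plot in the spirit of Figure 5.4
contrived_example %>%
  ggplot() +
  # one rectangle per observation, spanning from the mean (3, 3) to the data point;
  # the fill encodes the signed area, i.e., the summand's contribution to the covariance
  geom_rect(
    aes(
      xmin = pmin(x, mean(x)), xmax = pmax(x, mean(x)),
      ymin = pmin(y, mean(y)), ymax = pmax(y, mean(y)),
      fill = area_rectangle
    ),
    alpha = 0.4,
    color = "white"
  ) +
  # the four data points
  geom_point(aes(x = x, y = y), color = "darkorange", size = 3) +
  # the mean of both variables
  annotate("point", x = 3, y = 3, color = "white", size = 4)
```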
We can, of course, also calculate the covariance just with the built\-in base R function `cov`:
```
with(contrived_example, cov(x, y))
```
```
## [1] 0.25
```
And, using this function, we can calculate the covariance between the logarithm of `total_volume_sold` and `average_price` in the avocado data:[23](#fn23)
```
with(avocado_data, cov(log(total_volume_sold), average_price))
```
```
## [1] -0.5388084
```
Interestingly, the negative covariance in this example suggests that, across all associated data pairs, the larger `total_volume_sold`, the lower `average_price`. It is important to note that this is a descriptive statistic and is not to be interpreted as evidence of a causal relation between the two measures of interest: not in this example, not in any other. The covariance describes associated data points; it alone does not provide any evidence for causal relationships.
### 5\.3\.2 Correlation
Covariance is a very useful notion to show how two variables, well, co\-vary. But the problem with this notion of covariance is that it is not invariant under linear transformation. Consider the `contrived_example` from above once more. The original data had the following covariance:
```
with(contrived_example, cov(x, y))
```
```
## [1] 0.25
```
But if we just linearly transform, say, vector `y` to `1000 * y + 500` (e.g., because we switch to an equivalent, but numerically different measuring scale, such as going from Celsius to Fahrenheit), we obtain:
```
with(contrived_example, cov(x, 1000 * y + 500))
```
```
## [1] 250
```
This is a problem insofar as we would like to have a measure of how much two variables co\-vary that is robust against linear changes in measurement scale, like the difference between Celsius and Fahrenheit.
To compensate for this problem, we can look at **Bravais\-Pearson correlation**, which is covariance standardized by standard deviations:
\\\[r\_{\\vec{x}\\vec{y}} \= \\frac{\\text{Cov}(\\vec{x}, \\vec{y})}{\\text{SD}(\\vec{x}) \\ \\text{SD}(\\vec{y})}\\]
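To connect this definition to the functions we already know, we can compute the correlation by hand for the contrived example as the covariance divided by the product of the standard deviations; this yields the same value that `cor` returns below:
```
# correlation as standardized covariance, computed by hand
with(
  contrived_example,
  cov(x, y) / (sd(x) * sd(y))
)
```
```
## [1] 0.3
```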
Let’s check invariance under linear transformation, using the built\-in function `cor`. The correlation coefficient for the original data is:
```
with(contrived_example, cor(x, y))
```
```
## [1] 0.3
```
The correlation coefficient for the data with linearly transformed `y` is:
```
with(contrived_example, cor(x, 1000 * y + 500))
```
```
## [1] 0.3
```
Indeed, the correlation coefficient is nicely bounded to lie between \-1 and 1\. A correlation coefficient of 0 is to be interpreted as the absence of any correlation. A correlation coefficient of 1 is a perfect positive correlation (the higher \\(x\_i\\), the higher \\(y\_i\\)), and \-1 indicates a perfect negative correlation (the higher \\(x\_i\\), the lower \\(y\_i\\)). Again, pronounced positive or negative correlations are *not* to be confused with strong evidence for a causal relation. It is just a descriptive statistic capturing a property of associated measurements.
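A quick sanity check of these boundary cases, using the contrived example (this also anticipates part of Exercise 5\.4 below):
```
# a vector correlates perfectly with itself,
# and perfectly negatively with its sign-flipped copy
with(contrived_example, c(cor(x, x), cor(x, -x)))
```
```
## [1]  1 -1
```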
In the avocado data, the logarithm of `total_volume_sold` shows a noteworthy correlation with `average_price`. This is also visible in Figure [5\.5](Chap-02-03-summary-statistics-2D.html#fig:chap-02-03-avocado-scatter).
```
with(avocado_data, cor(log(total_volume_sold), average_price))
```
```
## [1] -0.5834087
```
Figure 5\.5: Scatter plot of avocado prices, plotted against (logarithms of) the total amount sold. The black line is a linear regression line indicating the (negative) correlation between these measures (more on this later).
**Exercise 5\.4: Covariance and Correlation**
1. Given two vectors of paired metric measurements \\(\\vec{x}\\) and \\(\\vec{y}\\), you are given the covariance \\(Cov(\\vec{x},\\vec{y}) \= 1\\) and the variance of each vector \\(Var(\\vec{x}) \= 25\\) and \\(Var(\\vec{y}) \= 36\\). Compute Pearson’s correlation coefficient for \\(\\vec{x}\\) and \\(\\vec{y}\\).
Solution
\\(r\_{\\vec{x}\\vec{y}}\=\\frac{1}{\\sqrt{25}\\sqrt{36}}\=\\frac{1}{30}\\)
2. Decide for the following statements whether they are true or false:
1. The covariance is bounded between \-100 and 100\.
2. The Pearson correlation coefficient is bounded between 0 and 1\.
3. For any (non\-trivial) vector \\(\\vec{x}\\) of metric measurements, \\(Cor(\\vec{x},\\vec{x}) \= 1\\).
Solution
Statements a. and b. are false; statement c. is correct. (The covariance is not bounded; the Pearson correlation coefficient is bounded between \-1 and 1; and any non\-trivial vector correlates perfectly with itself.)
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-02-04-Anscombe-example.html |
6\.1 Motivating example: Anscombe’s quartet
-------------------------------------------
To see how summary statistics can be highly misleading, and how a simple plot can reveal a lot more, consider a famous dataset available in R ([Anscombe 1973](#ref-anscombe1973)):
```
anscombe %>% as_tibble
```
```
## # A tibble: 11 × 8
## x1 x2 x3 x4 y1 y2 y3 y4
## <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 10 10 10 8 8.04 9.14 7.46 6.58
## 2 8 8 8 8 6.95 8.14 6.77 5.76
## 3 13 13 13 8 7.58 8.74 12.7 7.71
## 4 9 9 9 8 8.81 8.77 7.11 8.84
## 5 11 11 11 8 8.33 9.26 7.81 8.47
## 6 14 14 14 8 9.96 8.1 8.84 7.04
## 7 6 6 6 8 7.24 6.13 6.08 5.25
## 8 4 4 4 19 4.26 3.1 5.39 12.5
## 9 12 12 12 8 10.8 9.13 8.15 5.56
## 10 7 7 7 8 4.82 7.26 6.42 7.91
## 11 5 5 5 8 5.68 4.74 5.73 6.89
```
There are four pairs of \\(x\\) and \\(y\\) coordinates. Unfortunately, these are stored in an untidy wide format with two pieces of information buried inside each column name: for instance, the name `x3` indicates that this column contains the \\(x\\) coordinates for the 3rd pair. But, using tools from the `tidyr` package, we can tidy up quickly:
```
tidy_anscombe <- anscombe %>% as_tibble %>%
pivot_longer(
## we want to pivot every column
everything(),
## use reg-exps to capture 1st and 2nd character
names_pattern = "(.)(.)",
## assign names to new cols, using 1st part of
## what reg-exp captures as new column names
names_to = c(".value", "grp")
) %>%
mutate(grp = paste0("Group ", grp))
tidy_anscombe
```
```
## # A tibble: 44 × 3
## grp x y
## <chr> <dbl> <dbl>
## 1 Group 1 10 8.04
## 2 Group 2 10 9.14
## 3 Group 3 10 7.46
## 4 Group 4 8 6.58
## 5 Group 1 8 6.95
## 6 Group 2 8 8.14
## 7 Group 3 8 6.77
## 8 Group 4 8 5.76
## 9 Group 1 13 7.58
## 10 Group 2 13 8.74
## # … with 34 more rows
```
Here are some summary statistics for each of the four pairs:
```
tidy_anscombe %>%
group_by(grp) %>%
summarise(
mean_x = mean(x),
mean_y = mean(y),
min_x = min(x),
min_y = min(y),
max_x = max(x),
max_y = max(y),
crrltn = cor(x, y)
)
```
```
## # A tibble: 4 × 8
## grp mean_x mean_y min_x min_y max_x max_y crrltn
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Group 1 9 7.50 4 4.26 14 10.8 0.816
## 2 Group 2 9 7.50 4 3.1 14 9.26 0.816
## 3 Group 3 9 7.5 4 5.39 14 12.7 0.816
## 4 Group 4 9 7.50 8 5.25 19 12.5 0.817
```
These numeric indicators suggest that the four pairs of \\(x\\) and \\(y\\) values are very similar; only the ranges seem to differ. This is a brilliant example of how misleading numeric summary statistics can be when compared to a simple plot of the data:[25](#fn25)
```
tidy_anscombe %>%
ggplot(aes(x, y)) +
geom_smooth(method = lm, se = F, color = "darkorange") +
geom_point(color = project_colors[3], size = 2) +
scale_y_continuous(breaks = scales::pretty_breaks()) +
scale_x_continuous(breaks = scales::pretty_breaks()) +
labs(
title = "Anscombe's Quartet", x = NULL, y = NULL,
subtitle = bquote(y == 0.5 * x + 3 ~ (r %~~% .82) ~ "for all groups")
) +
facet_wrap(~grp, ncol = 2, scales = "free_x") +
theme(strip.background = element_rect(fill = "#f2f2f2", colour = "white"))
```
Figure 6\.1: Anscombe’s Quartet: four different data sets, all of which receive the same correlation score.
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-02-04-good-visualization.html |
6\.2 Visualization: the good, the bad and the infographic
---------------------------------------------------------
Producing good data visualization is very difficult. There are no uncontroversial criteria for what a good visualization should be. There are, unfortunately, quite clear examples of really bad visualizations. We will look at some of these examples in the following.
An absolute classic on data visualization is an early book by Edward Tufte ([1983](#ref-Tufte1983:The-Visual-Disp)) entitled “The Visual Display of Quantitative Information”. A distilled and over\-simplified summary of Tufte’s proposal is that we should eliminate *chart junk* and increase the *data\-ink ratio*, a concept which Tufte defines formally. The more information (\= data) a plot conveys, the higher the data\-ink ratio. The more ink it requires, the lower it is.
However, not all information in the data is equally relevant. Also, spending extra ink to reduce the recipient’s mental effort of retrieving the relevant information can be justified. Essentially, I would here propose to consider a special case of data visualization, common to scientific presentations. I want to speak of **hypothesis\-driven visualization** as a way of communicating a clear message, the message we care most about at the current moment of (scientific) exchange. Though merely a special instance of all the goals one could pursue with data visualization, focusing on this special case is helpful because it allows us to formulate a (defeasible) rule of thumb for good visualization in analogy to how natural language ought to be used in order to achieve optimal cooperative information flow (at least as conceived by authors):
**The vague \& defeasible rule of thumb of good data
visualization (according to the author).**
“Communicate a maximal degree of relevant true information in a way
that minimizes the recipient’s effort of retrieving this
information.”
Interestingly, just as natural language relies on a conventional medium for expressing ideas, which may put additional constraints on what counts as optimal communication (e.g., we might not be allowed to drop a pronoun in English even though it is clearly recoverable from the context, while Italian speakers would happily omit it), so too is data visualization constrained by certain unarticulated conventions in each specific scientific field.[26](#fn26)
Here are a few examples of bad plotting.[27](#fn27) To begin with, check out this fictitious data set:
```
large_contrast_data <- tribble(
~group, ~treatment, ~measurement,
"A", "on", 1000,
"A", "off", 1002,
"B", "on", 992,
"B", "off", 990
)
```
If we are interested in any potential influence of variables `group` and `treatment` on the measurement in question, the following graph is ruinously unhelpful because the large size of the bars renders the relatively small differences between them almost entirely unspottable.
```
large_contrast_data %>%
ggplot(aes(x = group, y = measurement, fill = treatment)) +
geom_bar(stat = "identity", position = "dodge")
```
A better visualization would be this:
```
large_contrast_data %>%
ggplot(aes(
x = group,
y = measurement,
shape = treatment,
color = treatment,
group = treatment
)
) +
geom_point() +
geom_line() +
scale_y_continuous(breaks = scales::pretty_breaks())
```
The following examples use the [Bio\-Logic Jazz\-Metal data set](app-93-data-sets-BLJM.html#app-93-data-sets-BLJM), in particular the following derived table of counts or the derived table of proportions:
```
BLJM_associated_counts
```
```
## # A tibble: 4 × 3
## JM LB n
## <chr> <chr> <int>
## 1 Jazz Biology 38
## 2 Jazz Logic 26
## 3 Metal Biology 20
## 4 Metal Logic 18
```
It is probably hard to believe but Figure [6\.2](Chap-02-04-good-visualization.html#fig:chap-02-04-bar-plot-3d-BLJM) was obtained without further intentional uglification just by choosing a default 3D bar plot display in Microsoft’s Excel. It does actually show the relevant information but it is entirely useless for a human observer without a magnifying glass, professional measuring tools and a calculator.
Figure 6\.2: Example of a frontrunner for the prize of today’s most complete disaster in the visual communication of information.
It gets slightly better with the following pie chart of the same numerical information, also generated with Microsoft’s Excel. Subjectively, Figure [6\.3](Chap-02-04-good-visualization.html#fig:chap-02-04-pie-chart-BLJM) is pretty much anything but pretty. Objectively, it is better than the previous visualization in terms of 3D bar plots shown in Figure [6\.2](Chap-02-04-good-visualization.html#fig:chap-02-04-bar-plot-3d-BLJM) but the pie chart is still not useful for answering the question which we care about, namely whether logicians are more likely to prefer Jazz over Metal than biologists.
Figure 6\.3: Example of a rather unhelpful visual representation of the BLJM data (when the research question is whether logicians are more likely to prefer Jazz over Metal than biologists).
We can produce a much more useful representation with the code below. (A similar visualization also appeared as Figure [5\.1](Chap-02-03-summary-statistics-counts.html#fig:chap-02-03-BLJM-proportions) in the previous chapter.)
```
BLJM_associated_counts %>%
ggplot(
aes(
x = LB,
y = n,
color = JM,
shape = JM,
group = JM
)
) +
geom_point(size = 3) +
geom_line() +
labs(
title = "Counts of choices of each music+subject pair",
x = "",
y = ""
)
```
**Infographics.** Scientific communication with visualized data is different from other modes of communication with visualized data. These other contexts come with different requirements for good data visualization. Good examples of highly successful *infographics* are produced by the famous illustrator Nigel Holmes, for instance. Figure [6\.4](Chap-02-04-good-visualization.html#fig:chap-02-04-vampire-energy) is an example from Holmes’ website showing different amounts of energy consumption for different household appliances. The purpose of this visualization is not (only) to communicate information about which of the listed household appliances is most energy\-intensive. Its main purpose is to raise awareness for the unexpectedly large energy consumption of household appliances in general (in standby mode).[28](#fn28)
Figure 6\.4: Example of an infographic. While possibly considered ‘chart junk’ in a scientific context, the eye\-catching and highly memorable (and pretty!) artwork serves a strong secondary purpose in contexts other than scientific ones where hypothesis\-driven precise communication with visually presented data is key.
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-02-04-ggplot.html |
6\.3 Basics of `ggplot`
-----------------------
In this section, we will work towards a first plot with `ggplot`. It will be a scatter plot (more on different kinds of plots in Section [6\.4](Chap-02-04-geoms.html#Chap-02-04-geoms)) for the [avocado price data](app-93-data-sets-avocado.html#app-93-data-sets-avocado). Check out the [ggplot cheat sheet](https://rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf) for a quick overview of the nuts and bolts of ggplot.
The following paragraphs introduce the key concepts of `ggplot`:
* **incremental composition**: adding elements or changing attributes of a plot incrementally
* **convenience functions \& defaults**: a closer look at high\-level convenience functions (like `geom_point`) and what they actually do
* **layers**: seeing how layers are stacked when we call, e.g. different `geom_` functions in sequence
* **grouping**: what happens when we use grouping information (e.g., for color, shape or in facets)
The section finishes with a first full example of a plot that has different layers, uses grouping, and customizes a few other things.
To get started, let’s first load the (preprocessed) avocado data set used for plotting:
```
avocado_data <- aida::data_avocado
```
### 6\.3\.1 Incremental composition of a plot
The “gg” in the package name `ggplot` is short for “grammar of graphics”. The package provides functions for describing scientific data plots in a compositional manner, i.e., for dealing with different recurrent elements in a plot in an additive way. As a result of this approach, we will use the symbol `+` to *add* more and more elements (or to override the implicit defaults in previously evoked elements) to build a plot. For example, we can build up a scatter plot for the [avocado price data](app-93-data-sets-avocado.html#app-93-data-sets-avocado) incrementally, starting with a call to the function `ggplot`, which just creates an empty plot:
```
incrementally_built_plot <- ggplot()
```
The plot stored in variable `incrementally_built_plot` is very boring. Take a look:
```
incrementally_built_plot
```
As you can see, there is nothing except a (white) canvas. But we can add some stuff. Don’t get hung up on the details right now; just notice that we use `+` to add stuff to our plot:[29](#fn29)
```
incrementally_built_plot +
# add a geom of type `point` (=> scatter plot)
geom_point(
# what data to use
data = avocado_data,
# supply a mapping (in the form of an 'aesthetic' (see below))
mapping = aes(
# which variable to map onto the x-axis
x = total_volume_sold,
# which variable to map onto the y-axis
y = average_price
)
)
```
You see that the function `geom_point` is what makes the points appear. You tell it which data to use and which mapping of variables from the data set to elements in the plot you like. That’s it, at least to begin with.
We can also supply the information about the data to use and the aesthetic mapping in the `ggplot` function call. Doing so will make this information the default for any subsequently added layer. Notice also that the `data` argument in function `ggplot` is the first argument, so we will frequently make use of piping, like in the following code which is equivalent to the previous in terms of output:
```
avocado_data %>%
ggplot(aes(x = total_volume_sold, y = average_price)) +
geom_point()
```
### 6\.3\.2 Elements in the layered grammar of graphs
Let’s take a step back. Actually, the function `geom_point` is a convenience function that does a lot of things automatically for us. It helps to understand subsequent code if we peek under the hood at least for a brief moment initially, if only to realize where some of the terminology in and around the “grammar of graphics” comes from.
The `ggplot` package defines a **layered grammar of graphics** ([Wickham 2010](#ref-Wickham2010:A-Layered-Gramm)). This is a structured description language for plots (relevant for data science). It uses a smart system of defaults, so that it often suffices to just call a convenience wrapper like `geom_point`. But underneath, there is the possibility of tinkering with (almost?) all of the (layered) elements and changing the defaults if need be.
The process of mapping data onto a visualization essentially follows this route:
> data \-\> statistical transformation \-\> geom. object \-\> aesthetics
You supply (tidy) data. The data is then transformed (e.g., by computing a summary statistic) in some way or another. This could just be an “identity map” in which case you will visualize the data exactly as it is. The resulting data representation is mapped onto some spatial (geometric) appearance, like a line, a dot, or a geometric shape. Finally, there is room to alter the specific aesthetics of this mapping from data to visual object, like adjusting the size or the color of a geometric object, possibly depending on some other properties it has (e.g., whether it is an observation for a conventional or an organically grown avocado).
To make explicit the steps which are implicitly carried out by `geom_point` in the example above, here is a fully verbose but output\-equivalent sequence of commands that builds the same plot by defining all the basic components manually:
```
avocado_data %>%
ggplot() +
# plot consists of layers (more on this soon)
layer(
# how to map columns onto ingredients in the plot
mapping = aes(x = total_volume_sold, y = average_price),
# what statistical transformation should be used? - here: none
stat = "identity",
# how should the transformed data be visually represented? - here: as points
geom = "point",
# should we tinker in any other way with the positioning of each element?
# - here: no, thank you!
position = "identity"
) +
# x and y axes are non-transformed continuous
scale_x_continuous() +
scale_y_continuous() +
# we use a cartesian coordinate system (not a polar or a geographical map)
coord_cartesian()
```
In this explicit call, we still need to specify the data and the mapping (which variable to map onto which axis). But we need to specify much more. We tell `ggplot` that we want standard (e.g., not log\-transformed) axes. We also tell it that our axes are continuous, that the data should not be transformed and that the visual shape (\= geom) to which the data is to be mapped is a point (hence the name `geom_point`).
It is not important to understand all of these components right now. It is important to have seen them once, and to understand that `geom_point` is a wrapper around this call which assumes reasonable defaults (such as non\-transformed axes, points for representation etc.).
### 6\.3\.3 Layers and groups
`ggplot` implements a *layered* grammar of graphics. Plots are compositionally built by combining different layers, if need be. For example, we can use another function from the `geom_` family of functions to display a different visualization derived from the same data on top of our previous scatter plot.[30](#fn30)
```
avocado_data %>%
ggplot(
mapping = aes(
# notice that we use the log (try without it to understand why)
x = log(total_volume_sold),
y = average_price
)
) +
# add a scatter plot
geom_point() +
# add a linear regression line
geom_smooth(method = "lm")
```
Notice that layering is really sequential. To see this, just check what happens when we reverse the calls of the `geom_` functions in the previous example:
```
avocado_data %>%
ggplot(
mapping = aes(
# notice that we use the log (try without it to understand why)
x = log(total_volume_sold),
y = average_price
)
) +
# FIRST: add a linear regression line
geom_smooth(method = "lm") +
# THEN: add a scatter plot
geom_point()
```
If you want lower layers to be visible behind layers added later, one possibility is to tinker with opacity, via the `alpha` parameter. Notice that the example below also changes the colors. The result is quite toxic, but at least you see the line underneath the semi\-transparent points.
```
avocado_data %>%
ggplot(
mapping = aes(
# notice that we use the log (try without it to understand why)
x = log(total_volume_sold),
y = average_price
)
) +
# FIRST: add a linear regression line
geom_smooth(method = "lm", color = "darkgreen") +
# THEN: add a scatter plot
geom_point(alpha = 0.1, color = "orange")
```
The aesthetics defined in the initial call to `ggplot` are global defaults for all layers to follow, unless they are overwritten. This also holds for the data supplied to `ggplot`. For example, we can create a second layer using another call to `geom_point` from a second data set (e.g., with a summary statistic), like so:
```
# create a small tibble with the means of both
# variables of interest
avocado_data_means <-
avocado_data %>%
summarize(
mean_volume = mean(log(total_volume_sold)),
mean_price = mean(average_price)
)
avocado_data_means
```
```
## # A tibble: 1 × 2
## mean_volume mean_price
## <dbl> <dbl>
## 1 11.3 1.41
```
```
avocado_data %>%
ggplot(
aes(x = log(total_volume_sold),
y = average_price)
) +
# first layer uses globally declared data & mapping
geom_point() +
# second layer uses different data set & mapping
geom_point(
data = avocado_data_means,
mapping = aes(
x = mean_volume,
y = mean_price
),
# change shape of element to display (see below)
shape = 9,
# change size of element to display
size = 12,
color = "skyblue"
)
```
### 6\.3\.4 Grouping
Categorical distinctions are frequently important in data analysis. Just think of the different combinations of factor levels in a factorial design, or the difference between conventionally grown and organically grown avocados. `ggplot` understands grouping very well and acts on it appropriately, if you tell it to in the right way.
Grouping can be relevant for different aspects of a plot: the color of points or lines, their shape, or even whether to plot everything together or separately. For instance, we might want to display different types of avocados in a different color. We can do this like so:
```
avocado_data %>%
ggplot(
aes(
x = log(total_volume_sold),
y = average_price,
# use a different color for each type of avocado
color = type
)
) +
geom_point()
```
Notice that we added the grouping information inside of `aes` to the call of `ggplot`. This way the grouping is the global default for the whole plot. Check what happens when we then add another layer, like `geom_smooth`:
```
avocado_data %>%
ggplot(
aes(
x = log(total_volume_sold),
y = average_price,
# use a different color for each type of avocado
color = type
)
) +
geom_point() +
geom_smooth(method = "lm")
```
The regression lines will also be shown in the colors of the underlying scatter plot. We can change this by overwriting the `color` attribute locally, but then we lose the grouping information:
```
avocado_data %>%
ggplot(
aes(
x = log(total_volume_sold),
y = average_price,
# use a different color for each type of avocado
color = type
)
) +
geom_point() +
geom_smooth(method = "lm", color = "black")
```
To recover the grouping information, we can use the explicit keyword `group` (which treats data from the relevant factor levels separately without directly changing their appearance):
```
avocado_data %>%
ggplot(
aes(
x = log(total_volume_sold),
y = average_price,
# use a different color for each type of avocado
color = type
)
) +
geom_point() +
geom_smooth(
# tell the smoother to deal with avocados types separately
aes(group = type),
method = "lm",
color = "black"
)
```
Finally, we see that the two regression lines cannot be uniquely associated with an avocado type, so we can additionally set the `linetype` of the regression lines conditional on avocado type:
```
avocado_data %>%
ggplot(
aes(
x = log(total_volume_sold),
y = average_price,
# use a different color for each type of avocado
color = type
)
) +
geom_point() +
geom_smooth(
# tell the smoother to deal with avocados types separately
aes(group = type, linetype = type),
method = "lm",
color = "black"
)
```
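Grouping can also determine whether the data is plotted together or in separate panels, as mentioned at the beginning of this section. A minimal sketch using `facet_wrap` (which also appeared in the Anscombe plot earlier) could look like this:
```
avocado_data %>%
  ggplot(aes(x = log(total_volume_sold), y = average_price)) +
  geom_point(alpha = 0.1) +
  geom_smooth(method = "lm", color = "black") +
  # one panel per avocado type instead of distinguishing groups by color
  facet_wrap(~type)
```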
### 6\.3\.5 Example of a customized plot
If done with the proper mind and heart, plots intended for sharing (and for communicating a point, following the idea of hypothesis\-driven visualization) will usually require a lot of tweaking. We will cover some of the most frequently needed tweaks in Section [6\.6](Chap-02-04-customization.html#Chap-02-04-customization).
To nevertheless get a feeling for where the journey is going, at least roughly, here is an example of a plot of the avocado data which is much more tweaked and honed. We do not claim that this plot is in any sense optimal; there is not even a clear hypothesis or point to communicate. It just showcases some functionality. Notice, for instance, that this plot uses two layers, invoked by `geom_point`, which shows the scatter plot of points, and by `geom_smooth`, which layers regression lines (one for each level of the grouping variable) on top of the point cloud.
```
# pipe data set into function `ggplot`
avocado_data %>%
# reverse factor level so that horizontal legend entries align with
# the majority of observations of each group in the plot
mutate(
type = fct_rev(type)
) %>%
# initialize the plot
ggplot(
# defined mapping
mapping = aes(
# which variable goes on the x-axis
x = total_volume_sold,
# which variable goes on the y-axis
y = average_price,
# which groups of variables to distinguish
group = type,
# color and fill to change by grouping variable
fill = type,
linetype = type,
color = type
)
) +
# declare that we want a scatter plot
geom_point(
# set low opacity for each point
alpha = 0.1
) +
# add a linear model fit (for each group)
geom_smooth(
color = "black",
method = "lm"
) +
# change the default (normal) of x-axis to log-scale
scale_x_log10() +
# add dollar signs to y-axis labels
scale_y_continuous(labels = scales::dollar) +
# change axis labels and plot title & subtitle
labs(
x = 'Total volume sold (on a log scale)',
y = 'Average price',
title = "Avocado prices against amount sold",
subtitle = "With linear regression lines"
)
```
**Exercise 6\.1: Find the match**
Determine which graph was created with which code:
Code 1:
```
code_1 <- ggplot(avocado_data,
mapping = aes(
x = average_price,
y = log(total_volume_sold),
color = type
)
) +
geom_point() +
geom_smooth(method = "lm")
```
Code 2:
```
code_2 <- ggplot(avocado_data,
mapping = aes(
x = log(total_volume_sold),
y = average_price,
color = type
)
) +
geom_point() +
geom_smooth(method = "lm")
```
Code 3:
```
code_3 <- ggplot(avocado_data,
mapping = aes(
x = log(total_volume_sold),
y = average_price
)
) +
geom_smooth(method = "lm", color = "black") +
geom_point(alpha = 0.1, color = "blue") +
labs(
x = 'Total volume sold (on a log scale)',
y = 'Average price',
title = "Avocado prices against amount sold"
)
```
Code 4:
```
code_4 <- ggplot(avocado_data,
mapping = aes(
x = log(total_volume_sold),
y = average_price,
linetype = type
)
) +
geom_smooth(method = "lm", color = "black") +
geom_point(alpha = 0.1, color = "blue") +
labs(
x = 'Total volume sold (on a log scale)',
y = 'Average price',
title = "Avocado prices against amount sold"
)
```
Plot 1:
Plot 2:
Plot 3:
Solution
Plot 1: Code 4
Plot 2: Code 1
Plot 3: Code 2
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-02-04-geoms.html |
6\.4 A rendezvous with popular geoms
------------------------------------
In the following, we will cover some of the more basic `geom_` functions relevant for our present purposes. It might be useful to read this section top\-to\-bottom at least once, not to think of it as a mere reference list. More information is provided by the [ggplot cheat sheet](https://rstudio.com/wp-content/uploads/2015/03/ggplot2-cheatsheet.pdf).
### 6\.4\.1 Scatter plots with `geom_point`
Scatter plots visualize pairs of associated observations as points in space. We have seen this for the avocado price data above. Let’s look at some further arguments we can use to tweak the presentation by `geom_point`. The following example changes the shape of the displayed objects from the default circles to tilted rectangles (sometimes called diamonds, e.g., `\diamond` in LaTeX), as well as their color, fill, size, and opacity.
```
avocado_data %>%
ggplot(aes(x = log(total_volume_sold), y = average_price)) +
geom_point(
# shape to display is number 23 (tilted rectangle, see below)
shape = 23,
# color of the surrounding line of the shape (for shapes 21-24)
color = "darkblue",
# color of the interior of each shape
fill = "lightblue",
# size of each shape (default is 1)
size = 5,
# level of opacity for each shape
alpha = 0.3
)
```
How do you know which shape corresponds to which number? By looking at the picture in Figure [6\.5](Chap-02-04-geoms.html#fig:02-04-ggplot-shapes), for instance.
Figure 6\.5: The numerical coding of different shapes in `ggplot`. Notice that objects 21\-24 are sensitive to both `color` and `fill`, but the others are only sensitive to `color`.
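If the figure is not at hand, you can quickly generate your own reference chart by plotting all shape codes yourself. The following is a minimal sketch (the grid layout and sizes are arbitrary choices):
```
# plot all 26 point shapes together with their numeric codes
tibble(shape = 0:25) %>%
  ggplot(aes(x = shape %% 13, y = shape %/% 13)) +
  geom_point(
    aes(shape = shape),
    size = 5, fill = "lightblue", color = "darkblue"
  ) +
  # use the numbers in `shape` directly as shape codes
  scale_shape_identity() +
  # print the numeric code above each shape
  geom_text(aes(label = shape), nudge_y = 0.4) +
  theme_void()
```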
### 6\.4\.2 Smooth
The `geom_smooth` function operates on two\-dimensional metric data and outputs a smoothed line, using different kinds of fitting functions. It is possible to show an indicator of certainty for the fit. We will deal with model fits in later parts of the book. For illustration, just enjoy a few examples here:
```
avocado_data %>%
ggplot(aes(x = log(total_volume_sold), y = average_price)) +
geom_point(
shape = 23,
color = "darkblue",
fill = "lightblue",
size = 3,
alpha = 0.3
) +
geom_smooth(
# fitting a smoothed curve to the data
method = "loess",
# display standard error around smoothing curve
se = T,
color = "darkorange"
)
```
### 6\.4\.3 Line
Use `geom_line` to display a line for your data if that data has associated (ordered) metric values. You can use argument `linetype` to specify the kind of line to draw.
```
tibble(
x = seq(-4, 8, by = 2),
y = x^2
) %>%
ggplot(aes(x, y)) +
geom_line(
linetype = "dashed"
)
```
Sometimes you may want to draw lines between items that are grouped:
```
BLJM_associated_counts %>%
ggplot(
aes(
x = LB,
y = n,
color = JM,
group = JM
)
) +
geom_line(size = 3)
```
### 6\.4\.4 Bar plot
A bar plot, plotted with `geom_bar` or `geom_col`, displays a single number for each of several groups for visual comparison by length. The difference between these two functions is that `geom_bar` relies on implicit counting, while `geom_col` expects the numbers that translate into the length of the bars to be supplied to it. This book favors `geom_col`: we first wrangle the data to obtain the numbers to be visualized, since this is often the cleaner approach and the numbers are useful to have independent access to (e.g., for referring to in the text).
Here’s an example of how `geom_bar` works (implicitly counting numbers of occurrences):
```
tibble(
shopping_cart = c(
rep("chocolate", 2),
rep("ice-cream", 5),
rep("cookies", 8)
)
) %>%
ggplot(aes(x = shopping_cart)) +
geom_bar()
```
To display this data with `geom_col` we need to count occurrences first ourselves:
```
tibble(
shopping_cart = c(
rep("chocolate", 2),
rep("ice-cream", 5),
rep("cookies", 8)
)
) %>%
dplyr::count(shopping_cart) %>%
ggplot(aes(x = shopping_cart, y = n)) +
geom_col()
```
To be clear, `geom_col` is essentially `geom_bar` when we overwrite the default statistical transformation of counting to “identity”:
```
tibble(
shopping_cart = c(
rep("chocolate", 2),
rep("ice-cream", 5),
rep("cookies", 8)
)
) %>%
dplyr::count(shopping_cart) %>%
ggplot(aes(x = shopping_cart, y = n)) +
geom_bar(stat = "identity")
```
Bar plots are a frequent sight in psychology papers. They are also controversial: they often fare badly with respect to the data\-ink ratio, especially when what is plotted are means of grouped variables. For example, the following plot is rather uninformative (even if the research question is a comparison of means):
```
avocado_data %>%
group_by(type) %>%
summarise(
mean_price = mean(average_price)
) %>%
ggplot(aes(x = type, y = mean_price)) +
geom_col()
```
It makes sense to use the available space for a more informative report of how the data points are distributed around the means, e.g., by using `geom_violin` or `geom_histogram`, as sketched below.
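For instance, a violin plot with the group means overlaid conveys both the comparison of means and the shape of the underlying distributions. This is a sketch only (it assumes a recent ggplot2 version in which `stat_summary` takes a `fun` argument):
```
avocado_data %>%
  ggplot(aes(x = type, y = average_price)) +
  # distribution of prices per type
  geom_violin(fill = "skyblue", alpha = 0.5) +
  # overlay the mean of each group as a single point
  stat_summary(fun = mean, geom = "point", size = 3, color = "firebrick")
```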
But bar plots may also be good enough when there is nothing more of immediate relevance to show, such as when we look at counts or proportions. Still, it helps to include a measure of uncertainty. For instance, using the [King of France data set](app-93-data-sets-king-of-france.html#app-93-data-sets-king-of-france), we can display proportions of ‘true’ answers with bootstrapped 95% confidence intervals, as in the plot below. Notice the use of the `geom_errorbar` function to display the intervals in the following example.
We first load the preprocessed data set.
```
data_KoF_processed <- aida::data_KoF_preprocessed
```
```
data_KoF_processed %>%
# drop unused factor levels
droplevels() %>%
# get means and 95% bootstrapped CIs for each condition
group_by(condition) %>%
nest() %>%
summarise(
CIs = map(data, function(d) bootstrapped_CI(d$response == "TRUE"))
) %>%
unnest(CIs) %>%
# plot means and CIs
ggplot(aes(x = condition, y = mean, fill = condition)) +
geom_col() +
geom_errorbar(aes(ymin = lower, ymax = upper, width = 0.2)) +
ylim(0, 1) +
labs(
x = "",
y = "",
title = "Proportion of 'TRUE' responses per condition",
subtitle = "Error bars are bootstrapped 95% CIs"
) +
theme(legend.position = "none") +
scale_fill_manual(values = project_colors) +
theme(axis.text.x = element_text(angle = 30, hjust = 1))
```
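The helper `bootstrapped_CI` used above comes with the book’s `aida` package. To give an idea of what such a helper does, here is a hypothetical, self\-contained stand\-in (named differently on purpose; the actual implementation may differ in its details):
```
# Hypothetical stand-in for a bootstrapped-CI helper: takes a logical or
# numeric vector and returns a one-row tibble with the lower bound, the
# sample mean, and the upper bound of a 95% percentile bootstrap.
bootstrapped_CI_sketch <- function(x, n_resamples = 1000) {
  boot_means <- map_dbl(
    seq_len(n_resamples),
    ~ mean(sample(x, size = length(x), replace = TRUE))
  )
  tibble(
    lower = quantile(boot_means, 0.025),
    mean  = mean(x),
    upper = quantile(boot_means, 0.975)
  )
}
```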
**Exercise 6\.2: Create a bar plot**
The data set we will work with in this exercise is currently not part of the `aida` package. We need to load it like so:
```
url_prefix <- "https://raw.githubusercontent.com/michael-franke/intro-data-analysis/master/data_sets/"
WHO_data_url <- str_c(url_prefix, "WHO.csv")
dataWHO <- read_csv(WHO_data_url)
```
Take a look at the data set first, in order to get familiar with it.
Create a bar plot, in which `Region` is on the x\-axis and `LifeExpectancy_mean` is on the y\-axis. Each bar should represent the mean life expectancy rate for each region.
Solution
A minimally functional solution would be this:
```
dataWHO %>%
# get the mean for each region
group_by(Region) %>%
summarise(
LifeExpectancy_mean = mean(LifeExpectancy)
) %>%
# plot
ggplot(
aes(
x = Region,
y = LifeExpectancy_mean,
fill = Region
)
) +
geom_col()
```
A prettier version suppresses the legend, changes the axis labels, and slightly tilts the tick labels:
```
dataWHO %>%
# get the mean for each region
group_by(Region) %>%
summarise(
LifeExpectancy_mean = mean(LifeExpectancy)
) %>%
# plot
ggplot(
aes(
x = Region,
y = LifeExpectancy_mean,
fill = Region
)
) +
geom_col() +
# nicer axis labels and a title
labs(
y = "Mean life expectancy",
title = "Mean life expectancy in different world regions"
) +
# hide legend (b/c redundant)
theme(legend.position = "none") +
# tilt tick labels by 30 degrees
theme(axis.text.x = element_text(angle = 30, hjust = 1))
```
### 6\.4\.5 Plotting distributions: histograms, boxplots, densities and violins
There are different ways for plotting the distribution of observations in a one\-dimensional vector, each with its own advantages and disadvantages: the histogram, a box plot, a density plot, and a violin plot. Let’s have a look at each, based on the `average_price` of different types of avocados.
The histogram displays the number of occurrences of observations inside of prespecified bins. By default, the function `geom_histogram` uses 30 equally spaced bins to display counts of your observations.
```
avocado_data %>%
ggplot(aes(x = average_price)) +
geom_histogram()
```
If we specify more bins, we get a more fine\-grained picture. (But notice that such a high number of bins works for the present data set, which has many observations; it would not necessarily work for a small data set.)
```
avocado_data %>%
ggplot(aes(x = average_price)) +
geom_histogram(bins = 75)
```
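Instead of the number of bins, you can also fix the width of each bin directly. Here is a quick sketch in which each bin covers a 10\-cent price range (an arbitrary choice):
```
avocado_data %>%
  ggplot(aes(x = average_price)) +
  # binwidth is specified on the scale of the x-variable
  geom_histogram(binwidth = 0.1)
```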
We can also layer histograms, but this is usually a bad idea (even if we tinker with opacity), because a higher layer might block important information in a lower layer:
```
avocado_data %>%
ggplot(aes(x = average_price, fill = type)) +
geom_histogram(bins = 75)
```
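If distributions for several groups really must be compared in one histogram\-like display, one workaround (not part of the book’s running examples, so treat it as a sketch) is to draw only the outlines with `geom_freqpoly`:
```
avocado_data %>%
  ggplot(aes(x = average_price, color = type)) +
  # frequency polygons: histogram counts drawn as lines,
  # so the groups do not occlude each other
  geom_freqpoly(bins = 75)
```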
An alternative display of distributional metric information is a **box plot**. Box plots are classics, also called *box\-and\-whisker plots*, and they visually report key summary statistics of your metric data. They work much better than histograms for direct comparison:
```
avocado_data %>%
ggplot(aes(x = type, y = average_price)) +
geom_boxplot()
```
What we see here is the median for each group (thick black line) and the 25% and 75% quantiles (box). The whiskers (straight lines) extend from the box to the most extreme observations that lie within 1\.5 \* IQR of the box, where the IQR is the “interquartile range”, i.e., the range between the 25% and 75% quantiles; observations beyond the whiskers are drawn as individual points.
To get a better picture of the shape of the distribution, `geom_density` uses a kernel density estimate to show a smoothed curve, where higher values roughly indicate ranges with a higher density of observations. Using opacity, `geom_density` is also useful for the close comparison of distributions across different groups:
```
avocado_data %>%
ggplot(aes(x = average_price, color = type, fill = type)) +
geom_density(alpha = 0.5)
```
For many groups to compare, density plots can become cluttered. **Violin plots** are like mirrored density plots and are better for comparison of multiple groups:
```
avocado_data %>%
ggplot(aes(x = type, y = average_price, fill = type)) +
geom_violin(alpha = 0.5)
```
A frequently seen method of visualization is to layer a jittered distribution of points under a violin plot, like so:
```
avocado_data %>%
ggplot(aes(x = type, y = average_price, fill = type)) +
geom_jitter(alpha = 0.3, width = 0.2) +
geom_violin(alpha = 0.5)
```
### 6\.4\.6 Rugs
Since plots of distributions, especially those with high\-level smoothing as in `geom_density` and `geom_violin`, fail to convey how many data points there actually are, rug plots are useful additions to such plots. `geom_rug` adds marks along the axes at the locations of the individual data points.
Here is an example of `geom_rug` combined with `geom_density`:
```
avocado_data %>%
filter(type == "organic") %>%
ggplot(aes(x = average_price)) +
geom_density(fill = "darkorange", alpha = 0.5) +
geom_rug()
```
Here are rugs on a two\-dimensional scatter plot:
```
avocado_data %>%
filter(type == "conventional") %>%
ggplot(aes(x = total_volume_sold, y = average_price)) +
geom_point(alpha = 0.3) +
geom_rug(alpha = 0.2)
```
### 6\.4\.7 Annotation
It can be useful to add further elements to a plot. We might want to add text, or specific geometrical shapes to highlight aspects of the data. The most general function for doing this is `annotate`. The function `annotate` takes as its first argument a `geom` argument, e.g., `"text"` or `"rect"`. It is therefore not a wrapper function in the `geom_` family of functions, but the underlying function around which convenience functions like `geom_text` or `geom_rect` are wrapped. The further arguments that `annotate` expects depend on the geom it is supposed to realize.
Suppose we want to add textual information at a particular coordinate. We can do this with `annotate` as follows:
```
avocado_data %>%
filter(type == "conventional") %>%
ggplot(aes(x = total_volume_sold, y = average_price)) +
geom_point(alpha = 0.2) +
annotate(
geom = "text",
# x and y coordinates for the text
x = 2e7,
y = 2,
# text to be displayed
label = "Bravo avocado!",
color = "firebrick",
size = 8
)
```
We can also single out some data points, like so:
```
avocado_data %>%
filter(type == "conventional") %>%
ggplot(aes(x = total_volume_sold, y = average_price)) +
geom_point(alpha = 0.2) +
annotate(
geom = "rect",
# coordinates for the rectangle
xmin = 2.1e7,
xmax = max(avocado_data$total_volume_sold) + 100,
ymin = 0.7,
ymax = 1.7,
color = "firebrick",
alpha = 0,
size = 2
)
```
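Other geoms can be passed to `annotate` in the same way. For instance, the following sketch adds an arrow together with a short text label; all coordinates and the label text are arbitrary illustration values:
```
avocado_data %>%
  filter(type == "conventional") %>%
  ggplot(aes(x = total_volume_sold, y = average_price)) +
  geom_point(alpha = 0.2) +
  annotate(
    geom = "segment",
    # start and end coordinates of the arrow
    x = 1.5e7, xend = 2.3e7,
    y = 1.9, yend = 1.4,
    arrow = grid::arrow(length = grid::unit(0.3, "cm")),
    color = "firebrick"
  ) +
  annotate(
    geom = "text",
    x = 1.5e7, y = 2,
    label = "high-volume observations",
    color = "firebrick"
  )
```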
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-02-04-faceting.html |
6\.5 Faceting
-------------
If we have grouping information, it can sometimes just be too much to put all of the information into a single plot, even if we use colors, shapes or line types for disambiguation. Facets are a great way to repeat the same kind of plot separately for the different levels of relevant factors.
The functions `facet_grid` and `facet_wrap` are used for faceting. They both expect a formula\-like syntax (we have not yet introduced formulas) using the notation `~` to separate factors. The difference between these functions shows most clearly when we have more than two factors. So let’s introduce a new factor `early` to the avocado price data, representing whether a recorded measurement was no later than the median date or not.
```
avocado_data_early_late <- avocado_data %>%
mutate(early = ifelse(Date <= median(Date), "early", "late"))
```
Using `facet_grid` we get a two\-dimensional grid, and we can specify along which axis of this grid the different factor levels are to range by putting the factors in the formula notation like this: `row_factor ~ col_factor`.
```
avocado_data_early_late %>%
ggplot(aes(x = log(total_volume_sold), y = average_price)) +
geom_point(alpha = 0.3, color = "skyblue") +
geom_smooth(method = "lm", color = "darkorange") +
facet_grid(type ~ early)
```
The same kind of plot realized with `facet_wrap` looks slightly different. The different factor\-level combinations are mushed together into a single panel label per pair.
```
avocado_data_early_late %>%
ggplot(aes(x = log(total_volume_sold), y = average_price)) +
geom_point(alpha = 0.3, color = "skyblue") +
geom_smooth(method = "lm", color = "darkorange") +
facet_wrap(type ~ early)
```
**Exercise 6\.3: Faceting**
In your own words, describe what each line of the two code chunks above does.
Solution
For both:
1. Defining which information should be placed on which axis.
2. A scatter plot is created using `geom_point` to show data points. Furthermore, the alpha level is chosen and the color of the points is skyblue.
3. A line is added using `geom_smooth` and the method `lm`. The color of the line is dark orange.
4. Both `geom_point` and `geom_smooth` are currently following the mapping given at the beginning.
`facet_grid`:
5. Now the grid is created with `facet_grid`, which divides the plot into type and time (early or late). In each part of the plot, you now see the subplot, which contains only the data points that belong to the respective combination. Type and time are placed on different axes.
`facet_wrap`:
5. Now the grid is created with `facet_wrap`, which divides the plot into type and time (early or late). In each part of the plot, you now see the subplot, which contains only the data points that belong to the respective combination. Here, type and time are combined into pairs.
With `facet_wrap` it is possible to specify the desired number of columns or rows:
```
avocado_data_early_late %>%
ggplot(aes(x = log(total_volume_sold), y = average_price)) +
geom_point(alpha = 0.3, color = "skyblue") +
geom_smooth(method = "lm", color = "darkorange") +
facet_wrap(type ~ early, nrow = 1)
```
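Another option that is often useful (though not needed for the book’s running example, so treat this as a sketch) is to let the axis ranges vary per panel via the `scales` argument:
```
avocado_data_early_late %>%
  ggplot(aes(x = log(total_volume_sold), y = average_price)) +
  geom_point(alpha = 0.3, color = "skyblue") +
  geom_smooth(method = "lm", color = "darkorange") +
  # each panel gets its own x-axis range
  facet_wrap(type ~ early, nrow = 1, scales = "free_x")
```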
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-02-04-customization.html |
6\.6 Customization etc.
-----------------------
There are many ways in which graphs can (and often: ought to) be tweaked further. The following can only cover a small, but hopefully useful selection.
### 6\.6\.1 Themes
The general appearance of a plot is governed by its **theme**. There are many ready\-made themes already in the `ggplot` package, as listed [here](https://ggplot2.tidyverse.org/reference/ggtheme.html), and there are more in several other packages. If we store a plot in a variable we can look at how different themes affect it.
```
avocado_grid_plot <- avocado_data_early_late %>%
ggplot(aes(x = log(total_volume_sold), y = average_price)) +
geom_point(alpha = 0.3, color = "skyblue") +
geom_smooth(method = "lm", color = "darkorange") +
facet_grid(type ~ early)
```
```
avocado_grid_plot + theme_classic()
```
```
avocado_grid_plot + theme_void()
```
```
avocado_grid_plot + theme_dark()
```
The plots in this book use the theme `hrbrthemes::theme_ipsum` from the `hrbrthemes` package as a default. You can set the default theme for all subsequent plots using a command like this:
```
# set the 'void' theme as global default
theme_set(
theme_void()
)
```
More elaborate tweaking of a plot’s layout can be achieved by the `theme` function. There are [many options](https://ggplot2.tidyverse.org/reference/theme.html). Some let you do crazy things:
```
avocado_grid_plot + theme(plot.background = element_rect(fill = "darkgreen"))
```
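A more typical use of `theme` is to adjust individual elements of the plot, for instance the facet strip labels and the axis tick labels. Here is a small sketch; the concrete sizes are arbitrary choices:
```
avocado_grid_plot +
  theme(
    # larger, bold facet labels
    strip.text = element_text(size = 14, face = "bold"),
    # slightly smaller axis tick labels
    axis.text = element_text(size = 8)
  )
```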
### 6\.6\.2 Guides
When using grouped variables (by color, shape, linetype, group, …) `ggplot` creates a legend automatically.
```
avocado_data %>%
ggplot(
mapping = aes(
x = log(total_volume_sold),
y = average_price,
color = type
)
) +
geom_point(alpha = 0.5)
```
The legend can be suppressed with the `guides` command. It takes as arguments the different types of grouping variables (like `color`, `group`, etc.).
```
avocado_data %>%
ggplot(
mapping = aes(
x = log(total_volume_sold),
y = average_price,
color = type
)
) +
geom_point(alpha = 0.5) +
# no legend for grouping by color
guides(color = "none")
```
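The `guides` function can also be used to tweak a legend rather than suppress it. As a sketch (the legend title shown here is our own choice), we can rename the legend and draw its keys at full opacity:
```
avocado_data %>%
  ggplot(
    mapping = aes(
      x = log(total_volume_sold),
      y = average_price,
      color = type
    )
  ) +
  geom_point(alpha = 0.5) +
  # rename the color legend and show its keys without transparency
  guides(
    color = guide_legend(title = "avocado type", override.aes = list(alpha = 1))
  )
```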
### 6\.6\.3 Axes, ticks and tick labels
If you need to use a non\-standard (Cartesian) axis, you can do so, e.g., by changing the \\(x\\)\-axis to a log scale (with base 10\):
```
avocado_data %>%
ggplot(
mapping = aes(
x = total_volume_sold,
y = average_price,
color = type
)
) +
geom_point(alpha = 0.5) +
scale_x_log10()
```
The `scales` package has a number of nice convenience functions for tweaking axis ticks (the places where axes are marked and possibly labeled) and tick labels (the labels applied to the tick marks). For example, we can add dollar signs to the price information, like so:
```
avocado_data %>%
ggplot(
mapping = aes(
x = total_volume_sold,
y = average_price,
color = type
)
) +
geom_point(alpha = 0.5) +
scale_x_log10() +
scale_y_continuous(labels = scales::dollar)
```
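If the automatically chosen tick positions are not to your liking, you can also set the breaks by hand and combine them with a label formatter from `scales`. This is a sketch; the break positions are arbitrary choices:
```
avocado_data %>%
  ggplot(
    mapping = aes(
      x = total_volume_sold,
      y = average_price,
      color = type
    )
  ) +
  geom_point(alpha = 0.5) +
  # hand-picked breaks and comma-formatted labels on the log-scaled x-axis
  scale_x_log10(
    breaks = c(1e4, 1e5, 1e6, 1e7),
    labels = scales::comma
  ) +
  scale_y_continuous(labels = scales::dollar)
```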
### 6\.6\.4 Labels
To change any other kind of labeling information (aside from tick mark labels on axes), the `labs` function can be used. It is rather self\-explanatory:
```
avocado_data %>%
ggplot(
mapping = aes(
x = total_volume_sold,
y = average_price,
color = type
)
) +
geom_point(alpha = 0.5) +
scale_x_log10() +
scale_y_continuous(labels = scales::dollar) +
# change axis labels and plot title & subtitle
labs(
x = 'Total volume sold (on a log scale)',
y = 'Average price',
title = "Avocado prices plotted against the amount sold per type",
subtitle = "With linear regression lines",
caption = "This plot shows the total volume of avocados sold against the average price for many different points in time."
)
```
### 6\.6\.5 Combining \& arranging plots
Presenting visual information in a tightly packed spatial arrangement can be helpful for the spectator. Everything is within a single easy saccade, so to speak. Therefore it can be useful to combine different plots into a single combined plot. The `cowplot` package helps with this, in particular the function `cowplot::plot_grid` as shown here:
```
# create an avocado plot
avocado_plot <- avocado_data %>%
ggplot(aes(x = total_volume_sold, y = average_price)) +
geom_point(alpha = 0.5)
# create a BLJM bar plot
BLJM_plot <- data_BLJM_processed %>%
ggplot(aes(x = response)) +
geom_bar()
# combine both into one
cowplot::plot_grid(
# plots to combine
avocado_plot,
BLJM_plot,
# number columns
ncol = 1
)
```
### 6\.6\.6 LaTeX expressions in plot labels
If you are enthusiastic about LaTeX, you can also use it inside of plot labels. The `latex2exp` package is useful here, which provides the function `latex2exp::TeX` to allow you to include LaTeX formulas. Just make sure that you double all backslashes, as in the following example:
```
avocado_data %>%
ggplot(aes(x = total_volume_sold, y = average_price)) +
geom_point(alpha = 0.5) +
labs(title = latex2exp::TeX("We can use $\\LaTeX$ here: $\\sum_{i = 0}^n \\alpha^i$"))
```
**Exercise 6\.4: Customization**
Feel free to play around with customizing your previously created plots or plots that you find in this book. Try to make annotations or try out different themes and colors. It will help you understand these kinds of plots a little better.
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-03-01-probability-basics.html |
7\.1 Probability
----------------
Intuitively put, a probability distribution is a formal construct that captures an agent’s belief state.
In Bayesian data analysis, that agent of interest is the analyst themselves or a hypothetical model of the analyst.
More concretely, a probability distribution assigns numerical values (conveniently scaled to lie between 0 and 1\) to a number of different contingencies, i.e., different ways the world could be.
These numbers can be interpreted as the weight of belief (also referred to as “degree of credence” in the philosophical literature) that the agent assigns to each contingency: the higher the number assigned to a contingency, the more likely the agent considers this way the world could be.
### 7\.1\.1 Outcomes, events, observations
To define the notion of probability, we first consider the space of relevant contingencies (ways the world could be) \\(\\Omega\\) containing all **elementary outcomes** \\(\\omega\_1, \\omega\_2, \\dots \\in \\Omega\\) of a process or an event whose execution is (partially) random or unknown.
Elementary outcomes are mutually exclusive (\\(\\omega\_i \\neq \\omega\_j\\) for all \\(i \\neq j\\)). The set \\(\\Omega\\) exhausts all possibilities.[31](#fn31)
**Example.** The set of elementary outcomes of a single coin flip is \\(\\Omega\_{\\text{coin flip}} \= \\left \\{ \\text{heads}, \\text{tails} \\right \\}\\). The elementary outcomes of tossing a six\-sided die are \\(\\Omega\_{\\text{standard die}} \= \\{\\)⚀, ⚁, ⚂, ⚃, ⚄, ⚅ \\(\\}\\).[32](#fn32)
An **event** \\(A\\) is a subset of \\(\\Omega\\). Think of an event as a (possibly partial)
observation. We might observe, for instance, not the full outcome of tossing a die, but only
that there is a dot in the middle. This would correspond to the event
\\(A \= \\{\\) ⚀, ⚂, ⚄ \\(\\}\\),
i.e., observing an odd\-numbered outcome. The *trivial observation* \\(A \= \\Omega\\) and the
*impossible observation* \\(A \= \\emptyset\\) are counted as events, too. The latter is included for
technical reasons that we don’t need to know for our purpose.
For any two events \\(A, B \\subseteq \\Omega\\), standard set operations correspond to logical
connectives in the usual way. For example, the conjunction \\(A \\cap B\\) is the observation of
both \\(A\\) and \\(B\\); the disjunction \\(A \\cup B\\) is the observation that it is either \\(A\\) or \\(B\\);
the negation of \\(A\\), \\(\\overline{A} \= \\left \\{ \\omega \\in \\Omega \\mid \\omega \\not \\in A \\right \\}\\), is the
observation that it is not \\(A\\).
### 7\.1\.2 Probability distributions
A **probability distribution** \\(P\\) over \\(\\Omega\\) is a function
\\(P \\ \\colon \\ \\mathfrak{P}(\\Omega) \\rightarrow \\mathbb{R}\\) [33](#fn33) that assigns to all events \\(A \\subseteq \\Omega\\) a real number, such that the following (so\-called Kolmogorov axioms) are satisfied:
A1\. \\(0 \\le P(A) \\le 1\\)
A2\. \\(P(\\Omega) \= 1\\)
A3\. \\(P(A\_1 \\cup A\_2 \\cup A\_3 \\cup \\dots) \= P(A\_1\) \+ P(A\_2\) \+ P(A\_3\) \+ \\dots\\) whenever \\(A\_1, A\_2, A\_3, \\dots\\) are mutually exclusive[34](#fn34)
Occasionally, we encounter the notation \\(P \\in \\Delta(\\Omega)\\) to express that \\(P\\) is a probability
distribution over \\(\\Omega\\). (E.g., in physics, theoretical economics or game theory. Less so in psychology or statistics.) If \\(\\omega \\in \\Omega\\) is an elementary event, we often write \\(P(\\omega)\\) as a shorthand for \\(P(\\left \\{ \\omega \\right \\})\\). In fact, if \\(\\Omega\\) is finite, it suffices to assign probabilities to elementary outcomes.
A number of rules follow immediately from the definition:
C1\. \\(P(\\emptyset) \= 0\\)
C2\. \\(P(\\overline{A}) \= 1 \- P(A)\\)
C3\. \\(P(A \\cup B) \= P(A) \+ P(B) \- P(A \\cap B)\\) for any \\(A, B \\subseteq \\Omega\\)
**Exercise 7\.1 \[optional]**
Prove C1, C2 and C3 using A1, A2 and A3\.
Solution
C1: \\(P(\\Omega \\cup \\emptyset) \= P(\\Omega) \+ P(\\emptyset) \\Leftrightarrow P(\\Omega) \= P(\\Omega) \+ P(\\emptyset) \\Leftrightarrow 0 \= P(\\emptyset)\\) following A3 since \\(\\Omega\\) and \\(\\emptyset\\) are mutually exclusive.
C2: \\(P(\\Omega) \= P(A \\cup \\overline{A}) \= P(A) \+ P(\\overline{A}) \= 1\\).
C3: \\(P(A \\cup B) \= P((A\-B) \\cup (A \\cap B) \\cup (B\-A)) \= P(A\-B) \+ P(A \\cap B) \+ P(B\-A) \= \\\\ (P(A \\cup B) \- P(B)) \+ P(A \\cap B) \+ (P(A \\cup B) \- P(A)) \= 2 P(A \\cup B) \- P(A) \- P(B) \+ P(A \\cap B) \\\\ \\Leftrightarrow P(A \\cup B) \= P(A) \+ P(B) \- P(A \\cap B)\\)
### 7\.1\.3 Interpretations of probability
It is reasonably safe to think of probability, as defined above, as a handy mathematical primitive which is useful for certain applications. There are at least three ways of thinking about where this primitive probability might come from:
1. **Frequentist:** Probabilities are generalizations of intuitions/facts about frequencies of events in repeated executions of a random event.
2. **Subjectivist:** Probabilities are subjective beliefs of a rational agent who is
uncertain about the outcome of a random event.
3. **Realist:** Probabilities are the property of an intrinsically random world.
While trying to stay away from philosophical quibbles, we will adopt a subjectivist interpretation of probabilities, but note that frequentist considerations should affect what a rational agent should believe.
### 7\.1\.4 Distributions as samples
No matter what your metaphysics of probability are, it is useful to realize that probability distributions can be approximately represented by sampling.
Think of an **urn** as a container with balls of different colors with different proportions (see Figure [7\.1](Chap-03-01-probability-basics.html#fig:03-01-single-urn)). In the simplest case, there is a number of \\(N \> 1\\) balls of which \\(k \> 0\\) are black and \\(N\-k \> 0\\) are white. (There are at least one black and one white ball.) For a single random draw from our urn we have: \\(\\Omega\_{\\text{our urn}} \= \\left \\{ \\text{white}, \\text{black} \\right \\}\\). We now draw from this urn with replacement. That is, we shake the urn, draw one ball, observe its color, take note of the color, and put it back into the urn. Each ball has the same chance of being sampled. If we imagine an infinite sequence of single draws from our urn with replacement, the limiting proportion with which we draw a black ball is \\(\\frac{k}{N}\\). This statement about frequency is what motivates saying that the probability of drawing a black ball on a single trial is (or should be[35](#fn35))
\\(P(\\text{black}) \= \\frac{k}{N}\\).
Figure 7\.1: An urn with seven black balls and three white balls. Imagine shaking this container, and then drawing blindly a single ball from it. If every ball has an equal probability of being drawn, what is the probability of drawing a black ball? That would be 0\.7\.
The plot below shows how the proportion of black balls drawn from an urn like in Figure [7\.1](Chap-03-01-probability-basics.html#fig:03-01-single-urn) with \\(k \= 7\\) black balls and \\(N \= 10\\) balls in total, gravitates to the probability 0\.7 when we keep drawing and drawing.
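Such a simulation is easy to reproduce with a few lines of R (a sketch; the book’s original figure may have been generated differently):
```
set.seed(1789) # fix the random seed so that the result is reproducible
tibble(
  draw  = 1:5000,
  # 1 = black ball (probability 0.7), 0 = white ball (probability 0.3)
  black = sample(c(1, 0), size = 5000, replace = TRUE, prob = c(0.7, 0.3))
) %>%
  # running proportion of black balls after each draw
  mutate(proportion_black = cummean(black)) %>%
  ggplot(aes(x = draw, y = proportion_black)) +
  geom_line() +
  # the true probability of drawing a black ball
  geom_hline(yintercept = 0.7, linetype = "dashed", color = "firebrick") +
  labs(x = "number of draws", y = "proportion of black balls drawn so far")
```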
To sum this up concisely, we have a random process (drawing once from the urn) whose outcome is uncertain, and we convinced ourselves that the probability of an outcome corresponds to the relative frequency with which it occurs in the limit of repeatedly executing the random process (i.e., sampling from the urn). From here, it requires only a small step to a crucial but ultimately very liberating realization. If the probability of an event occurring can be approximated by its frequency in a large sample, then we can approximately represent (say: internally in a computer) a probability distribution as one of two things:
1. a large set of (what is called: representative) samples; or even better as
2. an oracle (e.g., in the form of a clever algorithm) that quickly returns a representative sample.
This means that, for approximate computation with probabilities, we can represent distributions through samples or a sample\-generating function. We do not need to know the precise probabilities or be able to express them in a mathematical formula. **Sampling is often enough to approximate probability distributions**.
**Exercise 7\.2**
Explore how taking more or fewer samples affects the observed proportion of draws from an urn with the WebPPL code below. You can enter the number of black balls and the total number of balls for your urn. You can also enter the number of times you want to draw from your urn (with replacement, meaning that after every draw, the ball you just picked is placed back into the urn).
You should execute the code several times in sequence with the same parameter values.
This is because each time you run the code, another different random result will be shown.
By inspecting what happens across several runs (each drawing `nr_draws` times from the urn), you can check the effect of varying the variable `nr_draws`.
E.g., what happens with a low sample size, e.g., `nr_draws = 20`, as opposed to a large sample size, e.g., `nr_draws = 100000`?
```
// how many balls are black? how many in total?
var nr_black = 7
var nr_total = 10
// how many draws from the urn (with replacement)?
var nr_draws = 20
///fold:
var model = function() {
flip(nr_black/nr_total) == 1 ? "black" : "white"
}
display('Proportion of balls sampled')
Infer({method: "forward", samples : nr_draws}, model)
///
```
Solution
With a small sample size, there is a lot of variation in the observed proportion. As the sample size gets larger and larger, the result converges to `nr_black / nr_total`.
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-03-01-probability-marginal.html |
7\.2 Structured events \& marginal distributions
------------------------------------------------
The single urn scenario of the last section is a very basic first example.
To pave the way for learning about conditional probability and Bayes rule in the next sections, let us consider a slightly more complex example.
We call it the *flip\-and\-draw scenario*.
### 7\.2\.1 Probability table for a flip\-and\-draw scenario
Suppose we have two urns. Both have \\(N\=10\\) balls. Urn 1 has \\(k\_1\=2\\) black and \\(N\-k\_1 \= 8\\) white balls. Urn 2 has \\(k\_2\=4\\) black and \\(N\-k\_2\=6\\) white balls. We sometimes draw from urn 1, sometimes from urn 2\. To decide from which urn a ball should be drawn, we flip a fair coin. If it comes up heads, we draw from urn 1; if it comes up tails, we draw from urn 2\. The process is visualized in Figure [7\.2](Chap-03-01-probability-marginal.html#fig:03-01-flip-and-draw) below.
An elementary outcome of this two\-step process of flip\-and\-draw is a pair \\(\\langle \\text{outcome\-flip}, \\text{outcome\-draw} \\rangle\\). The set of all possible such outcomes is:
\\\[\\Omega\_{\\text{flip\-and\-draw}} \= \\left \\{ \\langle \\text{heads}, \\text{black} \\rangle, \\langle \\text{heads}, \\text{white} \\rangle, \\langle \\text{tails}, \\text{black} \\rangle, \\langle \\text{tails}, \\text{white} \\rangle \\right \\}\\,.\\]
The probability of event \\(\\langle \\text{heads}, \\text{black} \\rangle\\) is given by multiplying the probability of seeing “heads” on the first flip, which happens with probability \\(0\.5\\), and then drawing a black ball, which happens with probability \\(0\.2\\), so that \\(P(\\langle \\text{heads}, \\text{black} \\rangle) \= 0\.5 \\times 0\.2 \= 0\.1\\). The probability distribution over \\(\\Omega\_{\\text{flip\-draw}}\\) is consequently as in Table [7\.1](Chap-03-01-probability-marginal.html#tab:flipdrawprobabilities). (If in doubt, start flipping \& drawing and count your outcomes or use the WebPPL code box in the exercise below to simulate flips\-and\-draws.)
Table 7\.1: Joint probability table for the flip\-and\-draw scenario
| | heads | tails |
| --- | --- | --- |
| black | \\(0\.5 \\times 0\.2 \= 0\.1\\) | \\(0\.5 \\times 0\.4 \= 0\.2\\) |
| white | \\(0\.5 \\times 0\.8 \= 0\.4\\) | \\(0\.5 \\times 0\.6 \= 0\.3\\) |
Figure 7\.2: The flip\-and\-draw scenario, with transition and full path probabilities.
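To make the construction of Table 7\.1 concrete, here is a minimal sketch in R that builds the joint probability table from the coin bias and the urn compositions (the variable names are ours, chosen for illustration):
```
# joint probability table for the flip-and-draw scenario
p_heads <- 0.5          # fair coin
p_black_urn_1 <- 2 / 10 # urn 1: 2 black balls out of 10
p_black_urn_2 <- 4 / 10 # urn 2: 4 black balls out of 10
joint <- matrix(
  c(
    p_heads * p_black_urn_1,       (1 - p_heads) * p_black_urn_2,
    p_heads * (1 - p_black_urn_1), (1 - p_heads) * (1 - p_black_urn_2)
  ),
  nrow = 2, byrow = TRUE,
  dimnames = list(c("black", "white"), c("heads", "tails"))
)
joint      # reproduces Table 7.1
sum(joint) # all cells sum to 1
```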
### 7\.2\.2 Structured events and joint\-probability distributions
Table [7\.1](Chap-03-01-probability-marginal.html#tab:flipdrawprobabilities) is an example of a **joint probability distribution** over a structured event space, which here has two dimensions. Since our space of outcomes is the Cartesian product of two simpler outcome spaces, namely
\\(\\Omega\_{flip\\text{\-}\\\&\\text{\-}draw} \= \\Omega\_{flip} \\times \\Omega\_{draw}\\),[36](#fn36) we can use notation
\\(P(\\text{heads}, \\text{black})\\) as shorthand for \\(P(\\langle \\text{heads}, \\text{black} \\rangle)\\). More
generally, if \\(\\Omega \= \\Omega\_1 \\times \\dots \\Omega\_n\\), we can think of \\(P \\in \\Delta(\\Omega)\\)
as a joint probability distribution over \\(n\\) subspaces.
### 7\.2\.3 Marginalization
If \\(P\\) is a joint probability distribution over event space \\(\\Omega \= \\Omega\_1 \\times \\dots \\Omega\_n\\), the **marginal distribution** over subspace \\(\\Omega\_i\\), \\(1 \\le i \\le n\\) is the probability distribution that assigns to all \\(A\_i \\subseteq \\Omega\_i\\) the probability (where notation \\(P(\\dots, \\omega, \\dots )\\) is shorthand for \\(P(\\dots, \\{\\omega \\}, \\dots)\\)):[37](#fn37)
\\\[
\\begin{align\*}
P(A\_i) \& \= \\sum\_{\\omega\_1 \\in \\Omega\_{1}} \\sum\_{\\omega\_2 \\in \\Omega\_{2}} \\dots \\sum\_{\\omega\_{i\-1} \\in \\Omega\_{i\-1}} \\sum\_{\\omega\_{i\+1} \\in \\Omega\_{i\+1}} \\dots \\sum\_{\\omega\_n \\in \\Omega\_n} P(\\omega\_1, \\dots, \\omega\_{i\-1}, A\_{i}, \\omega\_{i\+1}, \\dots \\omega\_n)
\\end{align\*}
\\]
For example, the marginal distribution of draws derivable from Table [7\.1](Chap-03-01-probability-marginal.html#tab:flipdrawprobabilities) has \\(P(\\text{black}) \= P(\\text{heads, black}) \+ P(\\text{tails, black}) \= 0\.3\\) and \\(P(\\text{white}) \= 0\.7\\).[38](#fn38) The marginal distribution of coin flips derivable from the joint probability distribution in Table [7\.1](Chap-03-01-probability-marginal.html#tab:flipdrawprobabilities) gives \\(P(\\text{heads}) \= P(\\text{tails}) \= 0\.5\\), since the sum of each column is exactly \\(0\.5\\).
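The same marginalization can be done mechanically in R by summing the rows and columns of the joint table (a minimal sketch; the matrix simply restates Table 7\.1):
```
# marginal distributions from the joint probability table of the flip-and-draw scenario
joint <- matrix(
  c(
    0.1, 0.2,
    0.4, 0.3
  ),
  nrow = 2, byrow = TRUE,
  dimnames = list(c("black", "white"), c("heads", "tails"))
)
rowSums(joint) # marginal over ball color: black 0.3, white 0.7
colSums(joint) # marginal over coin flip: heads 0.5, tails 0.5
```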
**Exercise 7\.3**
1. Given the following joint probability table, compute the probability that a student does not attend the lecture, i.e., \\(P(\\text{miss})\\).
| | attend | miss |
| --- | --- | --- |
| rainy | 0\.1 | 0\.6 |
| dry | 0\.2 | 0\.1 |
Solution
\\(P(\\text{miss}) \= P(\\text{miss, rainy}) \+ P(\\text{miss, dry}) \= 0\.6 \+ 0\.1 \= 0\.7\\)
2. Play around with the following WebPPL implementation of the flip\-and\-draw scenario. Change the ‘input values’ of the coin’s bias and the probabilities of sampling a black ball from either urn. Inspect the resulting joint probability tables and the marginal distribution of observing “black”. Try to find at least three different parameter settings that result in the marginal probability of black being 0\.7\.
```
// you can play around with the values of these variables
var coin_bias = 0.5 // coin bias
var prob_black_urn_1 = 0.2 // probability of drawing "black" from urn 1
var prob_black_urn_2 = 0.4 // probability of drawing "black" from urn 2
///fold:
// convenience function for showing nicer tables
var condProb2Table = function(condProbFct, row_names, col_names, precision){
var matrix = map(function(row) {
map(function(col) {
_.round(Math.exp(condProbFct.score({"coin": row, "ball": col})),precision)},
col_names)},
row_names)
var max_length_col = _.max(map(function(c) {c.length}, col_names))
var max_length_row = _.max(map(function(r) {r.length}, row_names))
var header = _.repeat(" ", max_length_row + 2)+ col_names.join(" ") + "\n"
var row = mapIndexed(function(i,r) { _.padEnd(r, max_length_row, " ") + " " +
mapIndexed(function(j,c) {
_.padEnd(matrix[i][j], c.length+2," ")},
col_names).join("") + "\n" },
row_names).join("")
return header + row
}
// flip-and-draw scenario model
var model = function() {
var coin_flip = flip(coin_bias) == 1 ? "heads" : "tails"
var prob_black_selected_urn = coin_flip == "heads" ?
prob_black_urn_1 : prob_black_urn_2
var ball_color = flip(prob_black_selected_urn) == 1 ? "black" : "white"
return({coin: coin_flip, ball: ball_color})
}
// infer model and display as (custom-made) table
var inferred_model = Infer({method: 'enumerate'}, model)
display("Joint probability table")
display(condProb2Table(inferred_model, ["tails", "heads"], ["white", "black"], 3))
display("\nMarginal probability of ball color")
viz(marginalize(inferred_model, function(x) {return x.ball}))
///
```
Solution
Three possibilities for obtaining a value of 0\.7 for the marginal probability of “black”:
1. `prob_black_urn_1 = prob_black_urn_2 = 0.7`
2. `coin_bias = 1` and `prob_black_urn_1 = 0.7`
3. `coin_bias = 0.5`, `prob_black_urn_1 = 0.8` and `prob_black_urn_2 = 0.6`
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-03-01-probability-conditional.html |
7\.3 Conditional probability
----------------------------
Let us assume probability distribution \\(P \\in \\Delta(\\Omega)\\) and that events \\(A,B \\subseteq \\Omega\\) are given. The conditional probability of \\(A\\) given \\(B\\), written as \\(P(A \\mid B)\\), gives the probability of \\(A\\) on the assumption that \\(B\\) is true.[39](#fn39) It is defined like so:
\\\[P(A \\mid B) \= \\frac{P(A \\cap B)}{P(B)}\\]
Conditional probabilities are only defined when \\(P(B) \> 0\\).[40](#fn40)
**Example.** If a die is unbiased, each of its six faces has equal probability to come up after a toss. The probability of event \\(B \= \\{\\) ⚀, ⚂, ⚄ \\(\\}\\) that the tossed number is odd has probability \\(P(B) \= \\frac{1}{2}\\). The probability of event \\(A \= \\{\\) ⚂, ⚃, ⚄, ⚅ \\(\\}\\) that the tossed number is bigger than two is \\(P(A) \= \\frac{2}{3}\\). The probability that the tossed number is bigger than two *and* odd is \\(P(A \\cap B) \= P(\\{\\) ⚂, ⚄ \\(\\}) \= \\frac{1}{3}\\). The conditional probability of tossing a number that is bigger than two, when we know that the toss is odd, is \\(P(A \\mid B) \= \\frac{1 / 3}{1 / 2} \= \\frac{2}{3}\\).
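These numbers can be verified with a few lines of R (a minimal sketch in which events are represented as logical vectors over the six faces):
```
# conditional probability for the fair-die example
omega <- 1:6
p <- rep(1 / 6, 6)         # fair die: uniform probability over the faces
B <- omega %% 2 == 1       # tossed number is odd
A <- omega > 2             # tossed number is bigger than two
P_B <- sum(p[B])           # 1/2
P_A_and_B <- sum(p[A & B]) # 1/3
P_A_and_B / P_B            # P(A | B) = 2/3
```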
Algorithmically, conditional probability first rules out all events in which \\(B\\) is not true and then simply renormalizes the probabilities assigned to the remaining events in such a way that their relative probabilities remain unchanged.
Given this, another way of interpreting conditional probability is that \\(P(A \\mid B)\\) is what a rational agent should believe about \\(A\\) after observing (nothing more than) that \\(B\\) is true.
The agent rules out, possibly hypothetically, that \\(B\\) is false, but otherwise does not change opinion about the relative probabilities of anything that is compatible with \\(B\\).
This is also explained in the video embedded below.
### 7\.3\.1 Bayes rule
Looking back at the joint\-probability distribution in Table [7\.1](Chap-03-01-probability-marginal.html#tab:flipdrawprobabilities), the conditional probability \\(P(\\text{black} \\mid \\text{heads})\\) of drawing a black ball, given that the initial coin flip
showed heads, can be calculated as follows:
\\\[
P(\\text{black} \\mid \\text{heads}) \=
\\frac{P(\\text{black} , \\text{heads})}{P(\\text{heads})} \=
\\frac{0\.1}{0\.5} \= 0\.2
\\]
This calculation, however, is more roundabout than necessary.
We can read off the conditional probability directly from the way the flip\-and\-draw scenario was set up.
After flipping heads, we draw from urn 1, which has \\(k\=2\\) out of \\(N\=10\\) black balls, so clearly: if the initial flip comes up heads, then the probability of a black ball is \\(0\.2\\).
Indeed, in a step\-wise random generative process like the flip\-and\-draw scenario, some conditional probabilities are very clear, and sometimes given by definition.
These are, usually, the conditional probabilities that define how the process unfolds forward in time, so to speak.
**Bayes rule** is a way of expressing, in a manner of speaking, conditional probabilities in terms of the
“reversed” conditional probabilities:
\\\[P(B \\mid A) \= \\frac{P(A \\mid B) \\times P(B)}{P(A)}\\]
Bayes rule is a straightforward corollary of the definition of conditional probabilities,
according to which \\(P(A \\cap B) \= P(A \\mid B) \\times P(B)\\), so that:
\\\[
P(B \\mid A) \=
\\frac{P(A \\cap B)}{P(A)} \=
\\frac{P(A \\mid B) \\times P(B)}{P(A)}
\\]
Bayes rule allows for reasoning backward from observed effects to likely underlying causes. When we have a feed\-forward model of how unobservable causes probabilistically constrain observable outcomes, Bayes rule allows us to draw inferences about *latent/unobservable variables* based on the observation of their downstream effects.
Consider yet again the flip\-and\-draw scenario. But now assume that Jones flipped the coin and
drew a ball. We see that it is black. What is the probability that it was drawn from urn 1,
or equivalently, that the coin landed heads? It is not \\(P(\\text{heads}) \= 0\.5\\), the so\-called
*prior probability* of the coin landing heads. It is a conditional probability, also
called the *posterior probability*,[41](#fn41) namely \\(P(\\text{heads} \\mid \\text{black})\\). But it is not as easy and straightforward to write down as the reverse probability
\\(P(\\text{black} \\mid \\text{heads})\\) of which we said above that it is an almost trivial part of
the set up of the flip\-and\-draw scenario. It is here that Bayes rule has its purpose:
\\\[
P(\\text{heads} \\mid \\text{black}) \=
\\frac{P(\\text{black} \\mid \\text{heads}) \\times P(\\text{heads})}{P(\\text{black})} \=
\\frac{0\.2 \\times 0\.5}{0\.3} \=
\\frac{1}{3}
\\]
This result is quite intuitive. Drawing a black ball from urn 2 (i.e., after seeing tails) is twice as likely as drawing a black ball from urn 1 (i.e., after seeing heads). Consequently, after seeing a black ball drawn, with equal probabilities of heads and tails, the probability that
the coin landed tails is also twice as large as that it landed heads.
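The whole backward inference takes only a few lines of R once the forward probabilities are written down (a minimal sketch with variable names chosen for readability):
```
# Bayes rule for the flip-and-draw scenario
p_heads <- 0.5
p_black_given_heads <- 0.2 # urn 1
p_black_given_tails <- 0.4 # urn 2
# marginal probability of drawing a black ball
p_black <- p_black_given_heads * p_heads + p_black_given_tails * (1 - p_heads)
# posterior probability of heads after observing a black ball
p_black_given_heads * p_heads / p_black # 1/3
```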
**Exercise 7\.4**
1. Play around with the following WebPPL implementation of the flip\-and\-draw scenario, which calculates the posterior distribution over coin flip outcomes given that we observed the draw of a black ball. Change the parameters of the scenario and try to build intuitions about how your changes will affect the resulting posterior distribution.
```
// you can play around with the values of these variables
var coin_bias = 0.5 // coin bias
var prob_black_urn_1 = 0.2 // probability of drawing "black" from urn 1
var prob_black_urn_2 = 0.4 // probability of drawing "black" from urn 2
///fold:
// flip-and-draw scenario model
var model = function() {
var coin_flip = flip(coin_bias) == 1 ? "heads" : "tails"
var prob_black_selected_urn = coin_flip == "heads" ?
prob_black_urn_1 : prob_black_urn_2
var ball_color = flip(prob_black_selected_urn) == 1 ? "black" : "white"
condition(ball_color == "black")
return({coin: coin_flip})
}
// infer model and display as (custom-made) table
var inferred_model = Infer({method: 'enumerate'}, model)
viz(inferred_model)
///
```
Solution
Raising `coin_bias` makes “heads” a priori more likely and so increases the posterior probability of “heads”. Raising `prob_black_urn_1` relative to `prob_black_urn_2` makes a black ball stronger evidence for urn 1 and likewise shifts the posterior towards “heads”. If both urns are equally likely to yield a black ball, observing “black” is uninformative, and the posterior over coin flip outcomes equals the prior determined by `coin_bias`.
2. Suppose that we know that around 6% of the population has statisticositis, a rare disease that makes you allergic to fallacious statistical reasoning.
A new test has been developed to diagnose statisticositis but it is not infallible.
The *specificity* of the test (the probability that the test result is negative when the subject really does not have statisticositis) is 98%.
The *sensitivity* of the test (the probability that the test result is positive when the subject really does have statisticositis) is 95%.
When you take this test and it gives a negative test result, how likely is it that you do not have statisticositis?
Solution
First, let’s abbreviate the test result being negative or positive as \\(\\overline{T}\\) and \\(T\\) and actual statisticositis as \\(\\overline{S}\\) and \\(S\\).
We want to calculate \\(P(\\overline{S} \\mid \\overline{T})\\).
According to Bayes rule, \\(P(\\overline{S} \\mid \\overline{T}) \= \\frac{P(\\overline{T} \\mid \\overline{S}) P(\\overline{S})} {P(\\overline{T})}\\).
We are given that \\(P(\\overline{T} \\mid \\overline{S}) \= 0\.98\\), \\(P(\\overline{T} \\mid S) \= 1 \- P(T \\mid S) \= 0\.05\\) and \\(P(\\overline{S}) \= 1 \- P(S) \= 0\.94\\).
Furthermore, \\(P(\\overline{T}) \= P(\\overline{T},S) \+ P(\\overline{T},\\overline{S}) \= P(\\overline{T} \\mid S) P(S) \+ P(\\overline{T} \\mid \\overline{S}) P(\\overline{S}) \= 0\.9242\\). Putting this all together, we get \\(P(\\overline{S} \\mid \\overline{T}) \\approx 99\.7 \\%\\). So, given a negative test result, you can be pretty certain that you do not have statisticositis.
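For a quick numerical check, the same calculation can be run in R (a minimal sketch):
```
# statisticositis: P(no disease | negative test)
p_S <- 0.06  # prevalence
spec <- 0.98 # specificity: P(negative test | no disease)
sens <- 0.95 # sensitivity: P(positive test | disease)
p_neg <- (1 - sens) * p_S + spec * (1 - p_S) # P(negative test) = 0.9242
spec * (1 - p_S) / p_neg                     # approx. 0.997
```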
Check out [this website](https://oehoedatascience.com/2020/06/12/bayes-rule-applied-on-covid-19-immunity-testing/) for more details on these calculations in the context of a more serious application.
**Excursion: Bayes rule for data analysis** In later chapters, we will use Bayes rule for data analysis. The flip\-and\-draw scenario structurally “reflects” what will happen later. Think of the color of the ball drawn as the *data* \\(D\\) which we observe. Think of the coin as a *latent parameter* \\(\\theta\\) of a statistical model. Bayes rule for data analysis then looks like this:
\\\[P(\\theta \\mid D) \= \\frac{P(D \\mid \\theta) \\times P(\\theta)}{P(D)}\\]
We will discuss this at length in Chapter [8](Chap-03-03-models.html#Chap-03-03-models) and thereafter.
### 7\.3\.2 Stochastic (in\-)dependence
Event \\(A\\) is **stochastically independent** of \\(B\\) if, intuitively speaking, learning \\(B\\) does not change one’s beliefs about \\(A\\), i.e., \\(P(A \\mid B) \= P(A)\\). If \\(A\\) is stochastically independent of \\(B\\), then \\(B\\) is stochastically independent of \\(A\\) because:
\\\[
\\begin{aligned}
P(B \\mid A)
\& \=
\\frac{P(A \\mid B) \\ P(B)}{P(A)} \&\& \\text{\[Bayes rule]}
\\\\
\& \=
\\frac{P(A) \\ P(B)}{P(A)} \&\& \\text{\[by ass. of independence]}
\\\\
\& \=
P(B) \&\& \\text{\[cancellation]}
\\\\
\\end{aligned}
\\]
For example, imagine a flip\-and\-draw scenario where the initial coin flip has a bias of \\(0\.8\\) towards heads, but each of the two urns has the same composition, namely \\(3\\) black and \\(7\\) white balls. Intuitively and formally, the probability of drawing a black ball is then *independent* of the outcome of the coin flip; learning that the coin landed heads does not change our beliefs about how likely the subsequent draw is to result in a black ball. The probability table for this example is in Table [7\.2](Chap-03-01-probability-conditional.html#tab:flipdrawprobabilities-independent).
Table 7\.2: Joint probability table for a flip\-and\-draw scenario where the coin has a bias of \\(0\.8\\) towards heads and where each of the two urns holds \\(3\\) black and \\(7\\) white balls.
| | heads | tails | \\(\\Sigma\\) rows |
| --- | --- | --- | --- |
| black | \\(0\.8 \\times 0\.3 \= 0\.24\\) | \\(0\.2 \\times 0\.3 \= 0\.06\\) | 0\.3 |
| white | \\(0\.8 \\times 0\.7 \= 0\.56\\) | \\(0\.2 \\times 0\.7 \= 0\.14\\) | 0\.7 |
| \\(\\Sigma\\) columns | 0\.8 | 0\.2 | 1\.0 |
Independence shows in Table [7\.2](Chap-03-01-probability-conditional.html#tab:flipdrawprobabilities-independent) in the fact that the probability in each cell is the product of the two marginal probabilities. This is a direct consequence of stochastic independence:
**Proposition 7\.1 (Probability of conjunction of stochastically independent events)** For any pair of events \\(A\\) and \\(B\\) with non\-zero probability:
\\\[P(A \\cap B) \= P(A) \\ P(B) \\, \\ \\ \\ \\ \\text{\[if } A \\text{ and } B \\text{ are stoch. independent]} \\]
Show proof.
*Proof*. By assumption of independence, it holds that \\(P(A \\mid B) \= P(A)\\). But then:
\\\[
\\begin{aligned}
P(A \\cap B)
\& \=
P(A \\mid B) \\ P(B) \&\& \\text{\[def. of conditional probability]}
\\\\
\& \=
P(A) \\ P(B) \&\& \\text{\[by ass. of independence]}
\\end{aligned}
\\]
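As noted above, independence shows in the joint probability table as each cell being the product of its row and column marginals. This can be verified in R (a minimal sketch restating Table 7\.2):
```
# check independence in Table 7.2: each cell equals the product of its marginals
joint <- matrix(
  c(
    0.24, 0.06,
    0.56, 0.14
  ),
  nrow = 2, byrow = TRUE,
  dimnames = list(c("black", "white"), c("heads", "tails"))
)
all.equal(joint, outer(rowSums(joint), colSums(joint))) # TRUE
```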
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-03-01-probability-random-variables.html |
7\.4 Random variables
---------------------
So far, we have defined a probability distribution as a function that assigns a probability to each subset of the space \\(\\Omega\\) of elementary outcomes.
We saw that rational beliefs should conform to certain axioms, reflecting a “logic of rational beliefs”.
But in data analysis, we are often interested in a space of numeric outcomes.
You probably know stuff like the “normal distribution” which is a distribution that assigns a probability to each real number.
In keeping with our previous definition of probability as targeting a measurable set \\(\\Omega\\), we introduce what we could sloppily call “probability distributions over numbers” using the concept of random variables.
Caveat: random variables are very useful concepts and offer highly versatile notation, but both concept and notation can be elusive in the beginning.
Formally, a **random variable** is a function \\(X \\ \\colon \\ \\Omega \\rightarrow \\mathbb{R}\\) that assigns to each elementary outcome a numerical value.
It is reasonable to think of this number as a **summary statistic**: a number that captures one relevant aspect of what is actually a much more complex chunk of reality.
**Example.** For a single coin flip, we have \\(\\Omega\_{\\text{coin flip}} \= \\left \\{ \\text{heads}, \\text{tails} \\right \\}\\). A usual way of mapping this onto numerical outcomes is to define \\(X\_{\\text{coin flip}} \\ \\colon \\ \\text{heads} \\mapsto 1; \\text{tails} \\mapsto 0\\). Less trivially, consider flipping a coin two times. Elementary outcomes should be individuated by the outcome of the first flip and the outcome of the second flip, so that we get:
\\\[
\\Omega\_{\\text{two flips}} \= \\left \\{ \\langle \\text{heads}, \\text{heads} \\rangle, \\langle \\text{heads}, \\text{tails} \\rangle,
\\langle \\text{tails}, \\text{heads} \\rangle, \\langle \\text{tails}, \\text{tails} \\rangle \\right \\}
\\]
Consider the random variable \\(X\_{\\text{two flips}}\\) that counts the total number of heads. Crucially, \\(X\_{\\text{two flips}}(\\langle \\text{heads}, \\text{tails} \\rangle) \= 1 \= X\_{\\text{two flips}}(\\langle \\text{tails}, \\text{heads} \\rangle)\\). We assign the same numerical value to different elementary outcomes since the order is not relevant if we are only interested in a count of the number of heads.
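A minimal sketch in R makes the “summary statistic” view concrete: enumerate the elementary outcomes of two flips, map each outcome to its number of heads, and sum the probabilities of all outcomes that map to the same number (assuming a fair coin):
```
# the random variable "number of heads in two flips"
omega <- expand.grid(flip1 = c("heads", "tails"), flip2 = c("heads", "tails"))
X <- rowSums(omega == "heads") # summary statistic: count of heads per outcome
p <- rep(1 / 4, nrow(omega))   # fair coin: all four elementary outcomes equally likely
tapply(p, X, sum)              # P(X=0) = 0.25, P(X=1) = 0.5, P(X=2) = 0.25
```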
### 7\.4\.1 Notation \& terminology
Traditionally, random variables are represented by capital letters, like \\(X\\). The numeric values they take on are written as small letters, like \\(x\\).
We write \\(P(X \= x)\\) as a shorthand for the probability \\(P(\\left \\{ \\omega \\in \\Omega \\mid X(\\omega) \= x \\right \\})\\), that an event \\(\\omega\\) occurs which is mapped onto \\(x\\) by the random variable \\(X\\). For example, if our coin is fair, then \\(P(X\_{\\text{two flips}} \= x) \= 0\.5\\) for \\(x\=1\\) and \\(0\.25\\) for \\(x \\in \\{0,2\\}\\). Similarly, we can also write \\(P(X \\le x)\\) for the probability of observing any event that \\(X\\) maps to a number not bigger than \\(x\\).
If the range of \\(X\\) is countable (not necessarily finite), we say that \\(X\\) is **discrete**. For ease of exposition, we may say that if the range of \\(X\\) is an interval of real numbers, \\(X\\) is called **continuous**.
### 7\.4\.2 Cumulative distribution functions, mass \& density
For a discrete random variable \\(X\\), the **cumulative distribution function** \\(F\_X\\) associated with \\(X\\) is defined as:
\\\[
F\_X(x) \= P(X \\le x) \= \\sum\_{x' \\in \\left \\{ x'' \\in \\text{range}(X) \\mid x'' \\le x \\right \\}} P(X \= x')
\\]
The **probability mass function** \\(f\_X\\) associated with \\(X\\) is defined as:
\\\[
f\_X(x) \= P(X \= x)
\\]
**Example.** Suppose we flip a coin with a bias of \\(\\theta\\) towards heads \\(n\\) times. What is the probability that we will see heads \\(k\\) times? If we map the outcome of heads to 1 and tails to 0, this probability is given by the [Binomial distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-binomial), as follows:
\\\[
\\text{Binom}(K \= k ; n, \\theta) \= \\binom{n}{k} \\, \\theta^{k} \\, (1\-\\theta)^{n\-k}
\\]
Here \\(\\binom{n}{k} \= \\frac{n!}{k!(n\-k)!}\\) is the binomial coefficient, which gives the number of possibilities of drawing an unordered subset with \\(k\\) elements from a set with a total of \\(n\\) elements. Figure [7\.3](Chap-03-01-probability-random-variables.html#fig:ch-03-BinomialDistribution-Mass) gives examples of the Binomial distribution, concretely its probability mass functions, for two values of the coin’s bias, \\(\\theta \= 0\.25\\) or \\(\\theta \= 0\.5\\), when flipping the coin \\(n\=24\\) times. Figure [7\.4](Chap-03-01-probability-random-variables.html#fig:ch-03-BinomialDistribution-Cumulative) gives the corresponding cumulative distributions.
Figure 7\.3: Examples of the Binomial distribution. The \\(y\\)\-axis gives the probability of seeing \\(k\\) heads when flipping a coin \\(n\=24\\) times with a bias of either \\(\\theta \= 0\.25\\) or \\(\\theta \= 0\.5\\).
Figure 7\.4: Examples of the cumulative distribution of the Binomial distribution. The \\(y\\)\-axis gives the probability of seeing \\(k\\) or fewer outcomes of heads when flipping a coin \\(n\=24\\) times with a bias of either \\(\\theta \= 0\.25\\) or \\(\\theta \= 0\.5\\).
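In R, the probability mass function and the cumulative distribution function shown in these figures are available as `dbinom` and `pbinom` (a minimal sketch; these function families are discussed systematically in Section 7\.5):
```
# Binomial probability mass and cumulative probability for n = 24, theta = 0.25
dbinom(7, size = 24, prob = 0.25) # P(K = 7)
pbinom(7, size = 24, prob = 0.25) # P(K <= 7)
```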
For a continuous random variable \\(X\\), the probability \\(P(X \= x)\\) will usually be zero: it is virtually impossible that we will see precisely the value \\(x\\) realized in a random event that can realize uncountably many numerical values of \\(X\\). However, \\(P(X \\le x)\\) does usually take non\-zero values and so we define the cumulative distribution function \\(F\_X\\) associated with \\(X\\) as:
\\\[
F\_X(x) \= P(X \\le x)
\\]
Instead of a probability **mass** function, we derive a **probability density function** from the cumulative function as:
\\\[
f\_X(x) \= F'(x)
\\]
A probability density function can take values greater than one, unlike a probability mass
function.
**Example.** The [Gaussian (Normal) distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-normal) characterizes many natural distributions of measurements which are symmetrically spread around a central tendency. It is defined as:
\\\[
\\mathcal{N}(X \= x ; \\mu, \\sigma) \= \\frac{1}{\\sqrt{2 \\sigma^2 \\pi}} \\exp \\left ( \-
\\frac{(x\-\\mu)^2}{2 \\sigma^2} \\right)
\\]
where parameter \\(\\mu\\) is the *mean*, the central tendency, and parameter \\(\\sigma\\) is the *standard deviation*. Figure [7\.5](Chap-03-01-probability-random-variables.html#fig:ch-03-NormalDistribution-Density) gives examples of the probability density function of two normal distributions. Figure [7\.6](Chap-03-01-probability-random-variables.html#fig:ch-03-NormalDistribution-Cumulative) gives the corresponding cumulative distribution functions.
Figure 7\.5: Examples of the Normal distribution. In both cases \\(\\mu \= 0\\), once with \\(\\sigma \= 1\\) and once with \\(\\sigma \= 4\\).
Figure 7\.6: Examples of the cumulative normal distribution corresponding to the previous probability density functions.
### 7\.4\.3 Expected value \& variance
The **expected value** of a random variable \\(X\\) is a measure of central tendency. It tells us, like the name suggests, which average value of \\(X\\) we can expect when repeatedly sampling from \\(X\\). If \\(X\\) is discrete, the expected value is:
\\\[
\\mathbb{E}\_X \= \\sum\_{x} x \\times f\_X(x)
\\]
If \\(X\\) is continuous, it is:
\\\[
\\mathbb{E}\_X \= \\int x \\times f\_X(x) \\ \\text{d}x
\\]
The expected value is also frequently called the **mean**.
The **variance** of a random variable \\(X\\) is a measure of how widely the likely values of \\(X\\) are spread around the expected value. If \\(X\\) is discrete, the variance is:
\\\[
\\text{Var}(X) \= \\sum\_x (\\mathbb{E}\_X \- x)^2 \\times f\_X(x) \= \\mathbb{E}\_{X^2} \-\\mathbb{E}\_X^2
\\]
If \\(X\\) is continuous, it is:
\\\[
\\text{Var}(X) \= \\int (\\mathbb{E}\_X \- x)^2 \\times f\_X(x) \\ \\text{d}x \= \\mathbb{E}\_{X^2} \-\\mathbb{E}\_X^2
\\]
**Example.** If we flip a coin with bias \\(\\theta \= 0\.25\\) a total of \\(n\=24\\) times, we expect on average to see \\(n \\times\\theta \= 24 \\times 0\.25 \= 6\\) outcomes showing heads.[42](#fn42) The variance of a binomially distributed variable is \\(n \\times\\theta \\times(1\-\\theta) \= 24 \\times 0\.25 \\times 0\.75 \= \\frac{24 \\times 3}{16} \= \\frac{18}{4} \= 4\.5\\).
The expected value of a normal distribution is just its mean \\(\\mu\\) and its variance is \\(\\sigma^2\\).
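The Binomial numbers above can be checked against the definitions by an explicit weighted sum in R (a minimal sketch):
```
# expected value and variance of Binomial(n = 24, theta = 0.25) from the definitions
n <- 24
theta <- 0.25
k <- 0:n
ev <- sum(k * dbinom(k, n, theta))                # 6
variance <- sum((k - ev)^2 * dbinom(k, n, theta)) # 4.5
c(ev, variance)
```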
**Exercise 7\.5**
1. Compute the expected value and variance of a fair die.
Solution
```
expected_value <- 1*(1/6) + 2*(1/6) + 3*(1/6) + 4*(1/6) + 5*(1/6) + 6*(1/6)
variance <- 1^2*(1/6) + 2^2*(1/6) + 3^2*(1/6) + 4^2*(1/6) + 5^2*(1/6) + 6^2*(1/6) - expected_value^2
print(expected_value)
```
```
## [1] 3.5
```
```
variance
```
```
## [1] 2.916667
```
2. Below, you see several normal distributions with differing means \\(\\mu\\) and standard deviations \\(\\sigma\\). The red, unnumbered distribution is the so\-called standard normal distribution; it has a mean of 0 and a standard deviation of 1\. Compare each distribution below (1\-4\) to the standard normal distribution and think about how the parameters of the standard normal were changed. Also, think about which distribution (1\-4\) has the smallest/largest mean and the smallest/largest standard deviation.
Solution
Distribution 1 (\\(\\mu\\) \= 5, \\(\\sigma\\) \= 1\): larger mean, same standard deviation
Distribution 2 (\\(\\mu\\) \= 0, \\(\\sigma\\) \= 3\): same mean, larger standard deviation
Distribution 3 (\\(\\mu\\) \= 6, \\(\\sigma\\) \= 2\): larger mean, larger standard deviation
Distribution 4 (\\(\\mu\\) \= \-6, \\(\\sigma\\) \= 0\.5\): smaller mean, smaller standard deviation
### 7\.4\.4 Composite random variables
Composite random variables are random variables generated by mathematical operations conjoining other random variables. For example, if \\(X\\) and \\(Y\\) are random variables, then we can define a new derived random variable \\(Z\\) using notation like:
\\\[Z \= X \+ Y\\]
This notation looks innocuous but is conceptually tricky yet ultimately very powerful. On the face of it, it looks as if we are using `+` to add two functions. But a sampling\-based perspective makes this quite intuitive. We can think of \\(X\\) and \\(Y\\) as large samples representing the probability distributions in question. Then we build a sample for \\(Z\\) by adding elements of \\(X\\) and elements of \\(Y\\) pairwise. (If the samples are of different sizes, just add a random element of \\(Y\\) to each element of \\(X\\).)
Consider the following concrete example. \\(X\\) is the probability distribution of rolling a fair six\-sided die. \\(Y\\) is the probability distribution of flipping a biased coin that lands heads (represented as the number 1\) with probability 0\.75\. The derived probability distribution \\(Z \= X \+ Y\\) can be approximately represented by samples derived as follows:
```
n_samples <- 1e6
# `n_samples` rolls of a fair dice
samples_x <- sample(
1:6,
size = n_samples,
replace = T
)
# `n_samples` flips of a biased coin
samples_y <- sample(
c(0, 1),
prob = c(0.25, 0.75),
size = n_samples,
replace = T
)
samples_z <- samples_x + samples_y
tibble(outcome = samples_z) %>%
dplyr::count(outcome) %>%
mutate(n = n / sum(n)) %>%
ggplot(aes(x = outcome, y = n)) +
geom_col() +
labs(y = "proportion")
```
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-03-01-probability-R.html |
7\.5 Probability distributions in R
-----------------------------------
Appendix [B](app-91-distributions.html#app-91-distributions) covers a number of common probability distributions that are relevant for the purposes of this course.
Appendix [C](app-92-exponential-family.html#app-92-exponential-family) furthermore provides additional theoretical background on the *exponential family*, an important class of probability distributions widely used in statistics.
R has built\-in functions for most common probability distributions. Further distributions are covered in additional packages. If `mydist` is the name of a probability distribution, then R routinely offers four functions for `mydist`, distinguished by the first letter:
1. `dmydist(x, ...)` the *density function* gives the probability (mass/density) \\(f(x)\\) for `x`
2. `pmydist(x, ...)` the *cumulative probability function* gives the cumulative distribution function \\(F(x)\\) for `x`
3. `qmydist(p, ...)` the *quantile function* gives the value `x` for which `p = pmydist(x, ...)`
4. `rmydist(n, ...)` the *random sample function* returns `n` samples from the distribution
For example, the family of functions for the normal distribution has the following functions:
```
# density of standard normal at x = 1
dnorm(x = 1, mean = 0, sd = 1)
```
```
## [1] 0.2419707
```
```
# cumulative density of standard normal at q = 0
pnorm(q = 0, mean = 0, sd = 1)
```
```
## [1] 0.5
```
```
# point where the cumulative density of standard normal is p = 0.5
qnorm(p = 0.5, mean = 0, sd = 1)
```
```
## [1] 0
```
```
# n = 3 random samples from a standard normal
rnorm(n = 3, mean = 0, sd = 1)
```
```
## [1] 0.8625486 -2.6191819 -0.8560546
```
**Exercise 7\.6**
1. Use R to compute the median of the exponential distribution with rate \\(\\lambda \= 1\\). Remember that the median is the 50% quantile. The quantile function of the exponential distribution can be accessed with `qexp` in R.
Solution
```
qexp(0.5, rate = 1)
```
```
## [1] 0.6931472
```
2. Use R’s function for the cumulative normal distribution (see above) to compute this integral, i.e., the area under the density function of a standard normal distribution ranging from \-1 to 2:
\\\[
\\int\_{\-1}^{2} \\mathcal{N}(x, \\mu \= 0, \\sigma \= 1\) \\text{d}x
\\]
Solution
```
pnorm(2, mean = 0, sd = 1) - pnorm(-1, mean = 0, sd = 1)
```
```
## [1] 0.8185946
```
8\.3 Parameters, priors, and prior predictions
----------------------------------------------
We defined a Bayesian model as a pair consisting of a parameterized likelihood function and a prior distribution over parameter values:
\\\[
\\begin{aligned}
\& \\text{Likelihood: } \& P\_M(D \\mid \\theta) \\\\
\& \\text{Prior: } \& P\_M(\\theta)
\\end{aligned}
\\]
In this section, we dive deeper into what a parameter is, what a prior distribution \\(P\_M(\\theta)\\) is, and how we can use a model to make predictions about data.
The running example for this section is the *Binomial Model* as introduced above.
As a concrete example of data, we consider a case with \\(N\=24\\) coin flips and \\(k\=7\\) head outcomes.
### 8\.3\.1 What’s a model parameter?
A model parameter is a value that the likelihood depends on.
In the graphical notation we introduced in Section [8\.2](Chap-03-03-models-representation.html#Chap-03-03-models-representation), parameters usually (but not necessarily) show up as white nodes, because they are unknowns.
For example, the single parameter \\(\\theta\_c\\) in the Binomial Model shapes or fine\-tunes the likelihood function.
Remember that the likelihood function for the Binomial Model is:
\\\[ P\_M(k \\mid \\theta\_c, N) \= \\text{Binomial}(k, N, \\theta\_c) \= \\binom{N}{k}\\theta\_c^k(1\-\\theta\_c)^{N\-k} \\]
To understand the role of the parameter \\(\\theta\_c\\), we can plot the likelihood of the observed data (here: \\(k\=7\\) and \\(N\=24\\)) as a function of \\(\\theta\_c\\).
This is what is shown in Figure [8\.2](Chap-03-03-models-parameters-priors.html#fig:ch-03-02-LH-Binomial-Model).
For each logically possible value of \\(\\theta\_c \\in \[0;1]\\) on the horizontal axis, Figure [8\.2](Chap-03-03-models-parameters-priors.html#fig:ch-03-02-LH-Binomial-Model) plots the resulting likelihood of the observed data on the vertical axis.
What this plot shows is how the likelihood function depends on its parameter \\(\\theta\_c\\).
Different values of \\(\\theta\_c\\) make the data we observed more or less likely.
Figure 8\.2: Likelihood function for the Binomial Model, for \\(k\=7\\) and \\(N\=24\\).
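A plot along these lines can be reproduced with a few lines of R. The following is a minimal sketch of our own (not the code used for the original figure):
```
# likelihood of k = 7 heads in N = 24 flips, as a function of the coin bias
tibble(theta = seq(0, 1, length.out = 401)) %>%
  mutate(likelihood = dbinom(7, size = 24, prob = theta)) %>%
  ggplot(aes(x = theta, y = likelihood)) +
  geom_line() +
  labs(x = "theta_c", y = "likelihood of k = 7, N = 24")
```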
**Exercise 8\.1**
1. Use R to calculate how likely it is to get \\(k\=22\\) heads when tossing a coin with bias \\(\\theta\_c \= 0\.5\\) a total of \\(N\=100\\) times.
Solution
```
dbinom(22, size = 100, prob = 0.5, log = FALSE)
```
```
## [1] 5.783981e-09
```
2\. Which parameter value, \\(\\theta\_c \= 0\.4\\) or \\(\\theta\_c \= 0\.6\\), makes the data from the previous part of this exercise (\\(N\=100\\) and \\(k\=22\\)) more likely? Give a reason for your intuitive guess and use R to check your intuition.
Solution
The number of heads \\(k\=22\\) is (far) less than half of the total number of coin flips \\(N\=100\\). This should be more likely for a bias towards tails than for a bias towards heads. So, we might assume that \\(\\theta\_c\=0\.4\\) makes the data more likely than \\(\\theta\_c \= 0\.6\\).
```
dbinom(22, size = 100, prob = 0.4, log = FALSE)
```
```
## [1] 6.402414e-05
```
```
dbinom(22, size = 100, prob = 0.6, log = FALSE)
```
```
## [1] 8.815222e-15
```
### 8\.3\.2 Priors over parameters
The prior distribution over parameter values \\(P\_M(\\theta)\\) is an integral part of a model when we adopt a Bayesian approach to data analysis. This entails that two (Bayesian) models can share the same likelihood function, and yet ought to be considered as different models. (This also means that, when we say “Binomial Model” we really mean a whole class of models, all varying in the prior on \\(\\theta\\).)
In Bayesian data analysis, priors \\(P\_M(\\theta)\\) are most saliently interpreted as encoding the modeler’s prior beliefs about the parameters in question.
Ideally, the beliefs that support the specification of a prior should be supported by an argument, results of previous research, or other justifiable motivations.
However, informed subjective priors are just one of the ways to justify priors over parameters.
There are three main types of motivations for priors \\(P\_M(\\theta)\\); though the choice of a particular prior for a particular application might have mixed motives.
1. **Subjective priors** capture the modeler’s genuine subjective beliefs in the sense described above.
2. **Practical priors** are priors that are used pragmatically because of their specific usefulness, e.g., because they simplify a mathematical calculation or a computer simulation, or because they help in statistical reasoning, such as when *skeptical priors* are formulated that work against a particular conclusion.
3. **Objective priors** are priors that, as some argue, *should* be adopted for a given likelihood function to avoid conceptually paradoxical consequences. **We will not deal with objective priors in this introductory course beyond mentioning them here for completeness.**
Orthogonal to the kind of motivation given for a prior, we can distinguish different priors based on how strongly they commit the modeler to a particular range of parameter values. The most extreme case of ignorance is that of **uninformative priors**, which assign the same level of credence to all parameter values. Uninformative priors are also called *flat priors* because they show up as flat lines when plotted, both for discrete probability distributions and for continuous distributions defined over an interval with finite lower and upper bounds.[46](#fn46) Informative priors, on the other hand, can be *weakly informative* or *strongly informative*, depending on how much commitment they express. The most extreme case of commitment would be expressed in a **point\-valued prior**, which puts all probability (mass or density) on a single value of a parameter. Since this is no longer a respectable probability distribution (although it satisfies the definition), we speak of a *degenerate prior* here.
Figure [8\.3](Chap-03-03-models-parameters-priors.html#fig:ch-03-02-models-types-of-priors) shows examples of uninformative, weakly or strongly informative priors, as well as point\-valued priors for the Binomial Model. The priors shown here (resulting in four different Bayesian models all falling inside the family of Binomial Models) are as follows:
* *uninformative* : \\(\\theta\_c \\sim \\text{Beta}(1,1\)\\)
* *weakly informative* : \\(\\theta\_c \\sim \\text{Beta}(5,2\)\\)
* *strongly informative* : \\(\\theta\_c \\sim \\text{Beta}(50,20\)\\)
* *point\-valued* : \\(\\theta\_c \\sim \\text{Beta}(\\alpha, \\beta)\\) with \\(\\alpha, \\beta \\rightarrow \\infty\\) and \\(\\frac{\\alpha}{\\beta} \= \\frac{5}{2}\\)
Figure 8\.3: Examples of different kinds of Bayesian priors for coin bias \\(\\theta\_c\\) in the Binomial Model.
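The three proper Beta priors listed above can be visualized directly with `dbeta()`. The sketch below is our own code, not the code behind the original figure, and it leaves out the degenerate point\-valued case:
```
# density curves for the uninformative, weakly and strongly informative priors
tidyr::crossing(
  theta = seq(0, 1, length.out = 401),
  prior = c("Beta(1,1)", "Beta(5,2)", "Beta(50,20)")
) %>%
  mutate(
    a = case_when(prior == "Beta(1,1)" ~ 1, prior == "Beta(5,2)" ~ 5, TRUE ~ 50),
    b = case_when(prior == "Beta(1,1)" ~ 1, prior == "Beta(5,2)" ~ 2, TRUE ~ 20),
    density = dbeta(theta, a, b)
  ) %>%
  ggplot(aes(x = theta, y = density)) +
  geom_line() +
  facet_wrap(~prior, scales = "free_y")
```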
### 8\.3\.3 Prior predictions
How should priors be specified for a Bayesian model?
Several aspects might inform this decision.
Practical considerations may matter (maybe the model can only be implemented and run with common software for certain priors).
If subjective beliefs play a role, it may be hard to specify an exact shape of the prior distribution over some or all parameters, especially when these parameters are not easily interpretable in an intuitive way.
Therefore, two principles for the specification of priors are important:
1. **Sensitivity analysis:** Researchers should always check diligently whether or how much their results depend on the specific choices of priors, e.g., by running the same analysis with a wide range of different priors.
2. **Inspecting the prior predictive distribution:** It is one thing to ask whether a particular value for some parameter makes intuitive or conceptual sense. It is another at least as important question whether the predictions that the model makes about the data are intuitively or conceptually reasonable from an *a priori* perspective.[47](#fn47)
Indeed, by specifying priors over parameter values, Bayesian models make predictions about how likely a particular data outcome is, even before having seen any data at all.
The (Bayesian) **prior predictive distribution** of model \\(M\\) is a probability distribution over future or hypothetical data observations, written here as \\(D\_{\\text{pred}}\\) for “predicted data”:
\\\[
\\begin{aligned}
P\_M(D\_{\\text{pred}}) \& \= \\sum\_{\\theta} P\_M(D\_{\\text{pred}} \\mid \\theta) \\ P\_M(\\theta) \&\& \\text{\[discrete parameter space]} \\\\
P\_M(D\_{\\text{pred}}) \& \= \\int P\_M(D\_{\\text{pred}} \\mid \\theta) \\ P\_M(\\theta) \\ \\text{d}\\theta \&\& \\text{\[continuous parameter space]}
\\end{aligned}
\\]
The formula above is obtained by marginalization over parameter values (represented here as an integral for the continuous case).
We can think of the prior predictive distribution also in terms of samples.
We want to know how likely a given logically possible data observation \\(D\_{\\text{pred}}\\) is, according to the model with its *a priori* distribution over parameters.
So we sample, repeatedly, parameter vectors \\(\\theta\\) from the prior distribution.
For each sampled \\(\\theta\\), we then sample a potential data observation \\(D\_{\\text{pred}}\\).
The prior predictive distribution captures how likely it is under this sampling process to see each logically possible data observation \\(D\_{\\text{pred}}\\).
Notice that this sampling process corresponds exactly to the way in which we write down models using the conventions laid out in Section [8\.2](Chap-03-03-models-representation.html#Chap-03-03-models-representation), underlining once more how a model is really a representation of a random process that could have generated the data.
In the case of the Binomial Model when we use a Beta prior over \\(\\theta\\), the prior predictive distribution is so prominent that it has its own name and fame. It’s called the [Beta\-binomial distribution](selected-discrete-distributions-of-random-variables.html#app-91-distributions-beta-binomial). Figure [8\.4](Chap-03-03-models-parameters-priors.html#fig:ch-03-02-models-prior-predictives-binomial) shows the prior predictions for the four kinds of priors from Figure [8\.3](Chap-03-03-models-parameters-priors.html#fig:ch-03-02-models-types-of-priors) when \\(N \= 24\\).
Figure 8\.4: Prior predictive distributions for Binomial Models with the Beta\-priors from the previous figure.
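The sampling description above translates directly into R. The following sketch (our own illustration) approximates the prior predictive distribution of a Binomial Model with a \\(\\text{Beta}(5,2\)\\) prior and \\(N \= 24\\) by first sampling \\(\\theta\\) and then sampling \\(k\\):
```
n_samples <- 1e5
# step 1: sample coin biases from the prior
theta_prior <- rbeta(n_samples, shape1 = 5, shape2 = 2)
# step 2: for each sampled bias, sample a hypothetical number of heads out of N = 24
k_pred <- rbinom(n_samples, size = 24, prob = theta_prior)
# approximate prior predictive distribution over the number of heads
tibble(k_pred = k_pred) %>%
  dplyr::count(k_pred) %>%
  mutate(proportion = n / sum(n)) %>%
  ggplot(aes(x = k_pred, y = proportion)) +
  geom_col()
```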
9\.1 Bayes rule for parameter estimation
----------------------------------------
### 9\.1\.1 Definitions and terminology
Fix a Bayesian model \\(M\\) with likelihood \\(P(D \\mid \\theta)\\) for observed data \\(D\\) and prior over parameters \\(P(\\theta)\\). We then update our prior beliefs \\(P(\\theta)\\) to obtain posterior beliefs by Bayes rule:[48](#fn48)
\\\[P(\\theta \\mid D) \= \\frac{P(D \\mid \\theta) \\ P(\\theta)}{P(D)}\\]
The ingredients of this equation are:
* the **posterior distribution** \\(P(\\theta \\mid D)\\) \- our posterior beliefs about how likely each value of \\(\\theta\\) is given \\(D\\);
* the **likelihood function** \\(P(D \\mid \\theta)\\) \- how likely each observation of \\(D\\) is for a fixed \\(\\theta\\);
* the **prior distribution** \\(P(\\theta)\\) \- our initial (*prior*) beliefs about how likely each value of \\(\\theta\\) might be;
* the **marginal likelihood** \\(P(D) \= \\int P(D \\mid \\theta) \\ P(\\theta) \\ \\text{d}\\theta\\) \- how likely an observation of \\(D\\) is under our prior beliefs about \\(\\theta\\) (a.k.a., the prior predictive probability of \\(D\\) according to \\(M\\))
A frequently used shorthand notation for probabilities is this:
\\\[\\underbrace{P(\\theta \\, \| \\, D)}\_{posterior} \\propto \\underbrace{P(\\theta)}\_{prior} \\ \\underbrace{P(D \\, \| \\, \\theta)}\_{likelihood}\\]
where the “proportional to” sign \\(\\propto\\) indicates that the probabilities on the LHS are defined in terms of the quantity on the RHS after normalization. So, if \\(F \\colon X \\rightarrow \\mathbb{R}^\+\\) is a positive function of non\-normalized probabilities (assuming, for simplicity, finite \\(X\\)), \\(P(x) \\propto F(x)\\) is equivalent to \\(P(x) \= \\frac{F(x)}{\\sum\_{x' \\in X} F(x')}\\).
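For a model with a single parameter, Bayes rule can be carried out almost literally on a grid of parameter values. The following sketch is our own illustration; the choice of the 24/7 data and a flat prior is ours, not part of the definitions above:
```
# grid approximation of Bayes rule for the 24/7 example with a flat prior
tibble(theta = seq(0, 1, length.out = 401)) %>%
  mutate(
    prior        = dbeta(theta, 1, 1),                  # flat prior
    likelihood   = dbinom(7, size = 24, prob = theta),  # Binomial likelihood
    unnormalized = prior * likelihood,
    posterior    = unnormalized / sum(unnormalized)     # normalize over the grid
  ) %>%
  ggplot(aes(x = theta, y = posterior)) +
  geom_line()
```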
### 9\.1\.2 The effects of prior and likelihood on the posterior
The shorthand notation for the posterior \\(P(\\theta \\, \| \\, D) \\propto P(\\theta) \\ P(D \\, \| \\, \\theta)\\) makes it particularly clear that the posterior distribution is a “mix” of prior and likelihood.
Let’s first explore this “mixing property” of the posterior before worrying about how to compute posteriors concretely.
We consider the case of flipping a coin with unknown bias \\(\\theta\\) a total of \\(N\\) times and observing \\(k\\) heads (\= successes). This is modeled with the **Binomial Model** (see Section [8\.1](Chap-03-03-models-general.html#Chap-03-03-models-general)), using priors expressed with a [Beta distribution](selected-continuous-distributions-of-random-variables.html#app-91-distributions-beta), giving us a model specification as:
\\\[
\\begin{aligned}
k \& \\sim \\text{Binomial}(N, \\theta) \\\\
\\theta \& \\sim \\text{Beta}(a, b)
\\end{aligned}
\\]
To study the impact of the likelihood function, we compare two data sets. The first one is the contrived “24/7” example where \\(N \= 24\\) and \\(k \= 7\\). The second example uses a much larger naturalistic data set stemming from the [King of France](app-93-data-sets-king-of-france.html#app-93-data-sets-king-of-france) example, namely \\(k \= 109\\) for \\(N \= 311\\). These numbers are the number of “true” responses and the total number of responses for all conditions except Condition 1, which did not involve a presupposition.
```
data_KoF_cleaned <- aida::data_KoF_cleaned
```
```
data_KoF_cleaned %>%
filter(condition != "Condition 1") %>%
group_by(response) %>%
dplyr::count()
```
```
## # A tibble: 2 × 2
## # Groups: response [2]
## response n
## <lgl> <int>
## 1 FALSE 202
## 2 TRUE 109
```
The likelihood function for both data sets is plotted in Figure [9\.1](ch-03-03-estimation-bayes.html#fig:ch-03-03-estimation-likelihood-functions).
The most important thing to notice is that the more data we have (as in the KoF example), the narrower the range of parameter values that make the data likely.
Intuitively, this means that the more data we have, the more severely constrained the range of *a posteriori* plausible parameter values will be, all else equal.
Figure 9\.1: Likelihood for two examples of binomial data. The first example has \\(k \= 7\\) and \\(N \= 24\\). The second has \\(k \= 109\\) and \\(N \= 311\\).
Picking up the example from Section [8\.3\.2](Chap-03-03-models-parameters-priors.html#Chap-03-02-models-priors), we will consider the four types of priors shown below in Figure [9\.2](ch-03-03-estimation-bayes.html#fig:ch-03-03-estimation-types-of-priors).
Figure 9\.2: Examples of different kinds of Bayesian priors for the Binomial Model.
Combining the four different priors and the two different data sets, we see that the posterior is indeed a mix of prior and likelihood. In particular, we see that the weakly informative prior has only little effect if there are many data points (the KoF data), but does affect the posterior of the 24/7 case (compared against the uninformative prior).
Figure 9\.3: Posterior beliefs over bias parameter \\(\\theta\\) under different priors and different data sets. We see that strongly informative priors have more influence on the posterior than weakly informative priors, and that the influence of the prior is stronger for less data than for more.
**Exercise 9\.1**
1. Use the WebPPL code below to explore the effects of priors and different observations in the Binomial Model in order to be able to answer the questions in the second part below. Ask yourself how you need to change parameters in such a way as to:
* make the contribution of the likelihood function stronger
* make the prior more informative
```
// select your parameters here
var k = 7 // observed successes (heads)
var N = 24 // total flips of a coin
var a = 1 // first shape parameter of beta prior
var b = 1 // second shape parameter of beta prior
var n_samples = 50000 // number of samples for approximation
///fold:
display("Prior distribution")
var prior = function() {
beta(a, b)
}
viz(Infer({method: "rejection", samples: n_samples}, prior))
display("\nPosterior distribution")
var posterior = function() {
beta(k + a, N - k + b)
}
viz(Infer({method: "rejection", samples: n_samples}, posterior))
///
```
Solution
To make the influence of the likelihood function stronger, we need more data. Try increasing variables `N` and `k` without changing their ratio.
To make the prior more strongly informative, you should increase the shape parameters `a` and `b`.
2. Based on your explorations of the WebPPL code, which of the following statements do you think is true?
1. The prior always influences the posterior more than the likelihood.
2. The less informative the prior, the more the posterior is influenced by it.
3. The posterior is more influenced by the likelihood the less informative the prior is.
4. The likelihood always influences the posterior more than the prior.
5. The likelihood has no influence on the posterior in case of a point\-valued prior (assuming a single\-parameter model).
Solution
1. False
2. False
3. True
4. False
5. True
### 9\.1\.3 Computing Bayesian posteriors with conjugate priors
Bayesian posterior distributions can be hard to compute. Almost always, the prior \\(P(\\theta)\\) is easy to compute (otherwise, we might choose a different one for practicality). Usually, the likelihood function \\(P(D \\mid \\theta)\\) is also fast to compute. Everything seems innocuous when we just write:
\\\[\\underbrace{P(\\theta \\, \| \\, D)}\_{posterior} \\propto \\underbrace{P(\\theta)}\_{prior} \\ \\underbrace{P(D \\, \| \\, \\theta)}\_{likelihood}\\]
But the real pain is the normalizing constant, i.e., the marginal likelihood, a.k.a. the “integral of doom”, which can be intractable to compute, especially if the parameter space is large and not well\-behaved:
\\\[P(D) \= \\int P(D \\mid \\theta) \\ P(\\theta) \\ \\text{d}\\theta\\]
Section [9\.3](Ch-03-03-estimation-algorithms.html#Ch-03-03-estimation-algorithms) will, therefore, enlarge on methods to compute or approximate the posterior distribution efficiently.
Fortunately, the computation of Bayesian posterior distributions can be quite simple in special cases. If the prior and the likelihood function cooperate, so to speak, the computation of the posterior can be as simple as sleep. The nature of the data often prescribes which likelihood function is plausible. But we have more wiggle room in the choice of the priors. If prior \\(P(\\theta)\\) and posterior \\(P(\\theta \\, \| \\, D)\\) are of the same family, i.e., if they are the same kind of distribution albeit possibly with different parameterizations, we say that they are **conjugate**. In that case, the prior \\(P(\\theta)\\) is called a **conjugate prior** for the likelihood function \\(P(D \\, \| \\, \\theta)\\) from which the posterior \\(P(\\theta \\, \| \\, D)\\) is derived.
**Theorem 9\.1** The Beta distribution is the conjugate prior of binomial likelihood. For \\(\\theta \\sim \\text{Beta}(a,b)\\) as prior and data \\(k\\) and \\(N\\), the posterior is \\(\\theta \\sim \\text{Beta}(a\+k, b\+ N\-k)\\).
Show proof.
*Proof*. By construction, the posterior is:
\\\[P(\\theta \\mid \\langle{k, N \\rangle}) \\propto \\text{Binomial}(k ; N, \\theta) \\ \\text{Beta}(\\theta \\, \| \\, a, b) \\]
We extend the RHS by definitions, while omitting the normalizing constants:
\\\[
\\begin{aligned}
\\text{Binomial}(k ; N, \\theta) \\ \\text{Beta}(\\theta \\, \| \\, a, b) \&
\\propto \\theta^{k} \\, (1\-\\theta)^{N\-k} \\, \\theta^{a\-1} \\, (1\-\\theta)^{b\-1} \\\\ \&
\= \\theta^{k \+ a \- 1} \\, (1\-\\theta)^{N\-k \+b \-1}
\\end{aligned}
\\]
This latter expression is the non\-normalized Beta\-distribution for parameters \\(a \+ k\\) and \\(b \+ N \- k\\), so that we conclude with what was to be shown:
\\\[
\\begin{aligned}
P(\\theta \\mid \\langle k, N \\rangle) \& \= \\text{Beta}(\\theta \\, \| \\, a \+ k, b\+ N\-k)
\\end{aligned}
\\]
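For the 24/7 example with a flat \\(\\text{Beta}(1,1\)\\) prior, the theorem gives the posterior in closed form as \\(\\text{Beta}(8, 18\)\\). The sketch below is our own numerical check, comparing the conjugate result against a brute\-force grid normalization:
```
theta <- seq(0, 1, length.out = 401)
# closed-form posterior via conjugacy: Beta(1 + 7, 1 + 24 - 7) = Beta(8, 18)
posterior_conjugate <- dbeta(theta, 8, 18)
# brute force: normalize prior * likelihood on the grid to get a density
unnormalized   <- dbeta(theta, 1, 1) * dbinom(7, size = 24, prob = theta)
posterior_grid <- unnormalized / sum(unnormalized) / (theta[2] - theta[1])
# the two curves agree up to grid-approximation error
max(abs(posterior_conjugate - posterior_grid))
```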
**Exercise 9\.2**
1. Fill in the blanks in the code below to get a plot of the posterior distribution for the coin flip scenario with \\(k\=20\\), \\(N\=24\\), making use of conjugacy and starting with a uniform Beta prior.
```
theta = seq(0, 1, length.out = 401)
as_tibble(theta) %>%
mutate(posterior = ____ ) %>%
ggplot(aes(___, posterior)) +
geom_line()
```
Solution
```
theta <- seq(0, 1, length.out = 401)
as_tibble(theta) %>%
mutate(posterior = dbeta(theta, 21, 5)) %>%
ggplot(aes(theta, posterior)) +
geom_line()
```
2. Suppose that Jones flipped a coin with unknown bias 30 times. She observed 20 heads. She updates her beliefs rationally with Bayes rule. Her posterior beliefs have the form of a beta distribution with parameters \\(\\alpha \= 25\\), \\(\\beta \= 15\\). What distribution and what parameter values of that distribution capture Jones’ prior beliefs before updating her beliefs with this data?
Solution
\\(\\text{Beta}(5,5\)\\)
### 9\.1\.4 Excursion: Sequential updating
Ancient wisdom has coined the widely popular proverb: “Today’s posterior is tomorrow’s prior.” Suppose we collected data from an experiment, like \\(k \= 7\\) in \\(N \= 24\\). Using uninformative priors at the outset, our posterior belief after the experiment is \\(\\theta \\sim \\text{Beta}(8,18\)\\). But now consider what happened at half\-time. After half the experiment, we had \\(k \= 2\\) and \\(N \= 12\\), so our beliefs followed \\(\\theta \\sim \\text{Beta}(3, 11\)\\) at this moment in time. Using these beliefs as priors and observing the rest of the data, namely another \\(k \= 5\\) heads in the remaining \\(N \= 12\\) flips, results in updating the prior \\(\\theta \\sim \\text{Beta}(3, 11\)\\) to \\(\\theta \\sim \\text{Beta}(3 \+ 5, 11 \+ 7\) \= \\text{Beta}(8, 18\)\\), i.e., the same posterior belief we would have gotten had we updated in one swoop. Figure [9\.4](ch-03-03-estimation-bayes.html#fig:ch-03-03-estimation-sequential-updates) shows the steps through the belief space, starting uninformed and observing one piece of data at a time (going right for each outcome of heads, down for each outcome of tails).
Figure 9\.4: Beta distributions for different parameters. Starting from an uninformative prior (top left), we arrive at the posterior distribution in the bottom left, in any sequence of sequentially updating with the data.
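Using conjugacy, this half\-time bookkeeping can be checked with a few lines of R (our own sketch, not code from the text):
```
# one-swoop update of a flat Beta(1,1) prior with k = 7 heads in N = 24 flips
c(1 + 7, 1 + 24 - 7)                         # Beta(8, 18)
# sequential update: first k = 2 in N = 12, then k = 5 in N = 12
half_time <- c(1 + 2, 1 + 12 - 2)            # Beta(3, 11)
c(half_time[1] + 5, half_time[2] + 12 - 5)   # Beta(8, 18) again
```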
This sequential updating is not a peculiarity of the Beta\-Binomial case or of conjugacy. It holds in general for Bayesian inference. Sequential updating is a very intuitive property, but it is not shared by all other forms of inference from data. That Bayesian inference is sequential and commutative follows from the commutativity of multiplication of likelihoods (and the definition of Bayes rule).
**Theorem 9\.2** Bayesian posterior inference is sequential and commutative in the sense that for a data set \\(D\\) which is comprised of two mutually exclusive subsets \\(D\_1\\) and \\(D\_2\\) such that \\(D\_1 \\cup D\_2 \= D\\), we have:
\\\[ P(\\theta \\mid D ) \\propto P(\\theta \\mid D\_1\) \\ P(D\_2 \\mid \\theta) \\]
Show proof.
*Proof*. \\\[
\\begin{aligned}
P(\\theta \\mid D) \& \= \\frac{P(\\theta) \\ P(D \\mid \\theta)}{ \\int P(\\theta') \\ P(D \\mid \\theta') \\text{d}\\theta'} \\\\
\& \= \\frac{P(\\theta) \\ P(D\_1 \\mid \\theta) \\ P(D\_2 \\mid \\theta)}{ \\int P(\\theta') \\ P(D\_1 \\mid \\theta') \\ P(D\_2 \\mid \\theta') \\text{d}\\theta'} \& \\text{\[from multiplicativity of likelihood]} \\\\
\& \= \\frac{P(\\theta) \\ P(D\_1 \\mid \\theta) \\ P(D\_2 \\mid \\theta)}{ \\frac{k}{k} \\int P(\\theta') \\ P(D\_1 \\mid \\theta') \\ P(D\_2 \\mid \\theta') \\text{d}\\theta'} \& \\text{\[for an arbitrary positive constant k]} \\\\
\& \= \\frac{\\frac{P(\\theta) \\ P(D\_1 \\mid \\theta)}{k} \\ P(D\_2 \\mid \\theta)}{\\int \\frac{P(\\theta') \\ P(D\_1 \\mid \\theta')}{k} \\ P(D\_2 \\mid \\theta') \\text{d}\\theta'} \& \\text{\[rules of integration; basic calculus]} \\\\
\& \= \\frac{P(\\theta \\mid D\_1\) \\ P(D\_2 \\mid \\theta)}{\\int P(\\theta' \\mid D\_1\) \\ P(D\_2 \\mid \\theta') \\text{d}\\theta'} \& \\text{\[Bayes rule with } k \= \\int P(\\theta) P(D\_1 \\mid \\theta) \\text{d}\\theta ]\\\\
\\end{aligned}
\\]
### 9\.1\.5 Posterior predictive distribution
We already learned about the *prior predictive distribution* of a model in Chapter [8\.3\.3](Chap-03-03-models-parameters-priors.html#Chap-03-03-models-parameters-prior-predictive). Remember that the prior predictive distribution of a model \\(M\\) captures how likely hypothetical data observations are from an *a priori* point of view.
It was defined like this:
\\\[
\\begin{aligned}
P\_M(D\_{\\text{pred}}) \& \= \\sum\_{\\theta} P\_M(D\_{\\text{pred}} \\mid \\theta) \\ P\_M(\\theta) \&\& \\text{\[discrete parameter space]} \\\\
P\_M(D\_{\\text{pred}}) \& \= \\int P\_M(D\_{\\text{pred}} \\mid \\theta) \\ P\_M(\\theta) \\ \\text{d}\\theta \&\& \\text{\[continuous parameter space]}
\\end{aligned}
\\]
After updating beliefs about parameter values in the light of observed data \\(D\_{\\text{obs}}\\), we can similarly define the **posterior predictive distribution**, which is analogous to the prior predictive distribution, except that it relies on the posterior over parameter values \\(P\_{M(\\theta \\mid D\_{\\text{obs}})}\\) instead of the prior \\(P\_M(\\theta)\\):
\\\[
\\begin{aligned}
P\_M(D\_{\\text{pred}} \\mid D\_{\\text{obs}}) \& \= \\sum\_{\\theta} P\_M(D\_{\\text{pred}} \\mid \\theta) \\ P\_M(\\theta \\mid D\_{\\text{obs}}) \&\& \\text{\[discrete parameter space]} \\\\
P\_M(D\_{\\text{pred}} \\mid D\_{\\text{obs}}) \& \= \\int P\_M(D\_{\\text{pred}} \\mid \\theta) \\ P\_M(\\theta \\mid D\_{\\text{obs}}) \\ \\text{d}\\theta \&\& \\text{\[continuous parameter space]}
\\end{aligned}
\\]
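In terms of samples, the posterior predictive distribution works exactly like the prior predictive one, except that \\(\\theta\\) is drawn from the posterior. Here is a sketch of our own for the 24/7 example with a flat prior, relying on the conjugate posterior \\(\\text{Beta}(8, 18\)\\):
```
n_samples <- 1e5
# sample coin biases from the posterior Beta(8, 18)
theta_post <- rbeta(n_samples, shape1 = 8, shape2 = 18)
# for each sampled bias, sample a hypothetical replication of N = 24 flips
k_rep <- rbinom(n_samples, size = 24, prob = theta_post)
# approximate posterior predictive distribution over the number of heads
prop.table(table(k_rep))
```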
9\.2 Point\-valued and interval\-ranged estimates
-------------------------------------------------
Let’s consider the “24/7” example with a flat prior again, concisely repeated in Figure [9\.5](ch-03-04-parameter-estimation-points-intervals.html#fig:ch-03-03-estimation-24-7-overview).
Figure 9\.5: Prior (uninformative), likelihood and posterior for the 24/7 example.
The posterior probability distribution in Figure [9\.5](ch-03-04-parameter-estimation-points-intervals.html#fig:ch-03-03-estimation-24-7-overview) contains rich information. It specifies how likely each value of \\(\\theta\\) is, obtained by updating the original prior beliefs with the observed data. Such rich information is difficult to process and communicate in natural language. It is therefore convenient to have conventional means of summarizing the rich information carried in a probability distribution like in Figure [9\.5](ch-03-04-parameter-estimation-points-intervals.html#fig:ch-03-03-estimation-24-7-overview). Customarily, we summarize in terms of a point\-estimate and/or an interval estimate. The *point estimate* gives information about a “best value”, i.e., a salient point, and the *interval estimate* gives, usually, an indication of how closely other “good values” are scattered around the “best value”.
The most frequently used Bayesian point estimate is the mean of the posterior distribution, and the most frequently used Bayesian interval estimate is the credible interval.
We will introduce both below, alongside some alternatives (namely the *maximum a posteriori*, the *maximum likelihood estimate* and the inner\-quantile range).
### 9\.2\.1 Point\-valued estimates
A common Bayesian point estimate of parameter vector \\(\\theta\\) is **the mean of the posterior distribution** over \\(\\theta\\). It gives the value of \\(\\theta\\) which we would expect to see when basing our expectations on the posterior distribution:
\\\[
\\begin{aligned}
\\mathbb{E}\_{P(\\theta \\mid D)} \= \\int \\theta \\ P(\\theta \\mid D) \\ \\text{d}\\theta
\\end{aligned}
\\]
Taking the Binomial Model as example, if we start with flat beliefs, the expected value of \\(\\theta\\) after \\(k\\) successes in \\(N\\) flips can be calculated rather easily as \\(\\frac{k\+1}{N\+2}\\).[49](#fn49) For our example case, we calculate the expected value of \\(\\theta\\) as \\(\\frac{8}{26} \\approx 0\.308\\) (see also Figure [9\.5](ch-03-04-parameter-estimation-points-intervals.html#fig:ch-03-03-estimation-24-7-overview)).
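As a cross\-check (our own sketch), the closed\-form value can be compared against the Monte Carlo mean of samples from the conjugate posterior \\(\\text{Beta}(8, 18\)\\):
```
# exact posterior mean under a flat prior: (k + 1) / (N + 2)
(7 + 1) / (24 + 2)
# Monte Carlo approximation using samples from the conjugate posterior Beta(8, 18)
mean(rbeta(1e6, 8, 18))
```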
Another salient point\-estimate to summarize a Bayesian posterior distribution is the **maximum *a posteriori***, or MAP, for short. The MAP is the parameter value (tuple) that maximizes the posterior distribution:
\\\[ \\text{MAP}(P(\\theta \\mid D)) \= \\arg \\max\_\\theta P(\\theta \\mid D) \\]
While the mean of the posterior is “holistic” in the sense that it depends on the whole distribution, the MAP is not: it depends only on where the posterior reaches its maximum. The mean is therefore more faithful to the Bayesian ideal of taking the full posterior distribution into account. Moreover, depending on how Bayesian posteriors are computed/approximated, the estimation of a mean can be more reliable than that of a MAP.
The **maximum likelihood estimate (MLE)** is a point estimate based on the likelihood function alone.
It specifies the value of \\(\\theta\\) for which the observed data is most likely. We often use the notation \\(\\hat{\\theta}\\) to denote the MLE of \\(\\theta\\):
\\\[
\\begin{aligned}
\\hat{\\theta} \= \\arg \\max\_{\\theta} P(D \\mid \\theta)
\\end{aligned}
\\]
By ignoring the prior information entirely, the MLE is not a Bayesian notion, but a frequentist one (more on this in later chapters).
For the binomial likelihood function, the maximum likelihood estimate is easy to calculate as \\(\\frac{k}{N}\\), yielding
\\(\\frac{7}{24} \\approx 0\.292\\) for the running example. Figure [9\.6](ch-03-04-parameter-estimation-points-intervals.html#fig:ch-03-03-estimation-MLE) shows a graph of the non\-normalized likelihood function and indicates the maximum likelihood estimate (the value that maximizes the curve).
Figure 9\.6: Non\-normalized likelihood function for the observation of \\(k\=7\\) successes in \\(N\=24\\) flips, including maximum likelihood estimate.
**Exercise 9\.3**
Can you think of a situation where MLE and MAP are the same? HINT: Think which prior eliminates the difference between them!
Solution
MLE is a special case of MAP with a uniform prior.
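This can be verified numerically for the running example (a sketch of our own, using base R’s `optimize` on the posterior density):
```
# MAP for the 24/7 example with a flat prior: mode of the posterior Beta(8, 18)
optimize(function(theta) dbeta(theta, 8, 18), interval = c(0, 1), maximum = TRUE)$maximum
# MLE: k / N
7 / 24
```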
### 9\.2\.2 Interval\-ranged estimates
A common Bayesian interval estimate of the coin bias parameter \\(\\theta\\) is a **credible interval**.[50](#fn50) An interval \\(\[l;u]\\) is a \\(\\gamma\\%\\) credible interval for a random variable \\(X\\) if two conditions hold, namely
\\\[
\\begin{aligned}
P(l \\le X \\le u) \= \\frac{\\gamma}{100}
\\end{aligned}
\\]
and, secondly, for every \\(x \\in\[l;u]\\) and \\(x' \\not \\in\[l;u]\\) we have \\(P(X\=x) \> P(X \= x')\\). Intuitively, a \\(95\\%\\) credible interval gives the range of values in which we believe with
relatively high certainty that the true value resides. Figure [9\.5](ch-03-04-parameter-estimation-points-intervals.html#fig:ch-03-03-estimation-24-7-overview) indicates the \\(95\\%\\) credible interval, based on the posterior distribution \\(P(\\theta \\mid D)\\) of \\(\\theta\\), for the 24/7 example.[51](#fn51)
Instead of credible intervals, sometimes posteriors are also summarized in terms of the \\(\\gamma\\%\\) inner\-quantile region. This is the interval \\(\[l;u]\\) such that
\\\[P(X \\le l) \= P(X \\ge u) \= 0\.5 \\cdot (1 \- \\frac{\\gamma}{100})\\]
For example, a 95% inner\-quantile region contains all values except the smallest and the largest values, which each comprise 2\.5% of the probability mass/density.
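In R, an inner\-quantile region can be read off directly from the quantile function of the (exact or approximated) posterior. For the 24/7 example with a flat prior, where the posterior is \\(\\text{Beta}(8, 18)\\), a minimal check (not from the text) is:
```
# 95% inner-quantile region of the Beta(8, 18) posterior
qbeta(c(0.025, 0.975), shape1 = 8, shape2 = 18)
```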
The inner\-quantile range is easier to compute and does not have trouble with multi\-modality.
This is why it is frequently used to approximate Bayesian credible intervals.
However, care must be taken because the inner\-quantile range is not as intuitive a measure of the “best values” as credible intervals.
Credible intervals and inner\-quantile regions coincide for distributions that are symmetric around a single maximum; they therefore also tend to coincide for large sample sizes, where posteriors typically converge to normal distributions. There are, however, cases of clear divergence.
Figure [9\.7](ch-03-04-parameter-estimation-points-intervals.html#fig:ch-03-03-estimation-credible-interval-vs-interquantilerange) shows such a case.
While the inner\-quantile region does not include the most likely values, the credible interval does.
Figure 9\.7: Difference between a 95% credible interval and a 95% inner\-quantile region.
### 9\.2\.3 Computing Bayesian estimates
As mentioned, the most common (and arguably best) summaries to report for a Bayesian posterior are the posterior mean and a credible interval. The `aida` package which accompanies this book has a convenience function called `aida::summarize_sample_vector()` that gives the mean and 95% credible interval for a vector of samples.
You can use it like so:
```
# take samples from a posterior (24/7 example with flat priors)
posterior_samples <- rbeta(100000, 8, 18)
# get summaries
aida::summarize_sample_vector(
# vector of samples
samples = posterior_samples,
# name of output column
name = "theta"
)
```
```
## # A tibble: 1 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 theta 0.145 0.308 0.486
```
### 9\.2\.4 Excursion: Computing MLEs and MAPs in R
Computing the maximum or minimum of a function, such as an MLE or MAP estimate, is a common problem. R has a built\-in function `optim` that is useful for finding the minimum of a function. (If a maximum is needed, just multiply by \\(\-1\\) and search the minimum with `optim`.)
We can use the `optim` function to retrieve the MLE for the 24/7 data and the Binomial Model. Since the prior is flat, the posterior (which, by conjugacy, is a \\(\\text{Beta}(8, 18)\\) distribution) is proportional to the likelihood, so maximizing the posterior density yields both the MAP and the MLE:
```
# perform optimization
MLE <- optim(
# starting value for optimization
par = 0.2,
  # function to minimize (= optimize)
fn = function(par){
-dbeta(par, 8, 18)
},
# method of optimization (for 1-d cases)
method = "Brent",
# lower and upper bound of possible parameter values
lower = 0,
upper = 1
)
# retrieve MLE
MLE$par
```
```
## [1] 0.2916667
```
Indeed, the value obtained by computationally approximating the maximum likelihood estimate for this likelihood function coincides with the true value of \\(\\frac{7}{24}\\).
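To see MLE and MAP come apart, we can rerun the optimization with a non\-flat prior. The following variation is purely illustrative and not part of the text: with a (hypothetical) \\(\\text{Beta}(10, 10)\\) prior, conjugacy gives a \\(\\text{Beta}(17, 27)\\) posterior for the 24/7 data, whose maximum is pulled towards the prior mean of \\(0\.5\\).
```
# MAP under a hypothetical Beta(10, 10) prior; posterior is Beta(7 + 10, 17 + 10)
MAP_informative <- optim(
  par = 0.2,
  fn = function(par) {
    -dbeta(par, 17, 27)
  },
  method = "Brent",
  lower = 0,
  upper = 1
)
MAP_informative$par  # ~ 16/42 ~ 0.381, no longer equal to the MLE of 7/24 ~ 0.292
```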
9\.3 Approximating the posterior
--------------------------------
There are several methods of computing approximations of Bayesian posteriors. **Variational inference**, for example, hinges on the fact that under very general conditions, Bayesian posterior distributions are well approximated by (multi\-variate) normal distributions. The more data, the better the approximation. We can then reduce the approximation of a Bayesian posterior to a problem of optimizing parameter values: we simply look for the parameter values that yield the “best” parametric approximation to the Bayesian posterior. (Here, “best” is usually expressed in terms of minimizing a measure of divergence between probability distributions, such as [Kullback\-Leibler divergence](https://en.wikipedia.org/wiki/Kullback–Leibler_divergence).) Another prominent method of approximating Bayesian posteriors is [rejection sampling](https://en.wikipedia.org/wiki/Rejection_sampling).
The most prominent class of methods to approximate Bayesian posteriors are Markov Chain Monte Carlo methods. We will describe the most basic version of such MCMC algorithms below. For most applications in the context of this introductory book, it suffices to accept that there are black boxes (with some knobs for fine\-tuning) that, if you supply a model description, priors and data, will return samples from the posterior distribution.
### 9\.3\.1 Of apples and trees: Markov Chain Monte Carlo sampling
At the beginning of each summer, Nature sends out the Children to distribute the apples among the trees. It is customary that bigger trees ought to receive more apples. Indeed, every tree is supposed to receive apples in proportion to how many leaves it has. If Giant George (an apple tree!) has twice as many leaves as Thin Finn (another apple tree!), Giant George is to receive twice as many apples as Thin Finn. This means that if there are \\(n\_a\\) apples to distribute in total, and \\(L(t)\\) is the number of leaves of tree \\(t\\), every tree should receive \\(A(t)\\) apples, where:
\\\[ A(t) \= \\frac{L(t)}{\\sum\_{t'} L(t')} \\ n\_a \\]
The trouble is that Nature does not know the number of leaves of all the trees: Nature does not care about numbers. The Children, however, can count. But they cannot keep in mind the number of leaves for many trees for a long time. And no single Child could ever visit all the trees before the winter. This is why the Children distribute apples in a way that approximates Nature’s will. The more apples to distribute, the better the approximation. Nature is generally fine with approximate but practical solutions.
When a Child visits a tree, it affectionately hangs an apple into its branches. It also writes down the name of the tree in a list next to the number of the apple it has just delivered. It then looks around and selects a random tree in the neighborhood. If the current tree \\(t\_c\\), where the Child is at present, has fewer leaves than this other tree \\(t\_o\\), i.e., if \\(L(t\_c) \< L(t\_o)\\), the Child visits \\(t\_o\\). If instead \\(L(t\_c) \\ge L(t\_o)\\), the Child flips a biased coin and visits \\(t\_o\\) with probability \\(\\frac{L(t\_o)}{L(t\_c)}\\). In other words, the Child will always visit a tree with more leaves, and it will visit a tree with fewer leaves depending on the proportion of leaves.
When a large number of apples has been distributed, Nature looks at the list of trees each Child has visited. This list of tree names is a set of **representative samples** from the probability distribution:
\\\[P(t) \\propto L(t)\\]
These samples were obtained without the knowledge of the normalizing constant. The Children only had \\(L(t)\\) at their disposal. When trees are parameter tuples \\(\\theta\\) and the number of leaves is the product \\(P(D \\mid \\theta) \\ P(\\theta)\\), the Children would deliver samples from the posterior distribution *without* knowledge of the normalizing constant (a.k.a. the integral of doom).
The sequence of trees visited by a single Child is a **sample chain**. Usually, Nature sends out at least 2\-4 Children. The first tree a Child visits is the **initialization of the chain**. Sometimes Nature selects initial trees strategically for each Child. Sometimes Nature lets randomness rule. In any case, a Child might be quite far away from the meadow with lush apple trees, the so\-called **critical region** (where to dwell makes the most sense). It might take many tree hops before a Child reaches this meadow. Nature, therefore, allows each Child to hop from tree to tree for a certain time, the **warm\-up period**, before the Children start distributing apples and taking notes. If each Child only records every \\(k\\)\-th tree it visits, Nature calls \\(k\\) a **thinning factor**. Thinning generally reduces **autocorrelation** (think: the amount to which subsequent samples do not carry independent information about the distribution). Since every next hop depends on the current tree (and only on the current tree), the whole process is a **Markov process**. It is light on memory and parallelizable but also affected by autocorrelation. Since we are using samples, a so\-called **Monte Carlo method**, the whole affair is a **Markov Chain Monte Carlo** algorithm. It is one of many. It’s called **Metropolis\-Hastings**. More complex MCMC algorithms exist. One class of such MCMC algorithms is called **Hamiltonian Monte Carlo**, and these approaches use gradients to optimize the **proposal function**, i.e., the choice of the next tree to consider going to. They use the warm\-up period to initialize certain tuning parameters, making them much faster and more reliable (at least if the distribution of leaves among neighboring trees is well\-behaved).
How could Nature be sure that the plan succeeded? If not even Nature knows the distribution \\(P(t)\\), how can we be sure that the Children’s list gives representative samples to work with? Certainty is petty. The reduction of uncertainty is key! Since we send out several Children in parallel, and since each Child distributed many apples, we can compare the list of trees delivered by each Child (\= the set of samples in each chain). For that purpose, we can use statistics and ask: is it plausible that the set of samples in each chain has been generated from the same probability distribution? \- The answer to this question can help reduce uncertainty about the quality of the sampling process.
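To make the allegory concrete, here is a minimal Metropolis\-Hastings sampler in R for the 24/7 example (an illustrative sketch, not the book's code; samplers like Stan's, introduced below, are far more sophisticated). Notice that it only ever evaluates the non\-normalized product of likelihood and prior, just as the Children only ever count leaves.
```
# non-normalized posterior ("number of leaves") for the 24/7 example, flat prior
target <- function(theta) {
  if (theta <= 0 || theta >= 1) return(0)
  dbinom(7, 24, theta) * dbeta(theta, 1, 1)
}
# a single Metropolis-Hastings chain with a normal proposal ("neighboring trees")
run_MH_chain <- function(n_samples = 10000, init = 0.5, proposal_sd = 0.1) {
  chain <- numeric(n_samples)
  current <- init
  for (i in seq_len(n_samples)) {
    proposal <- rnorm(1, mean = current, sd = proposal_sd)
    # accept with probability min(1, target(proposal) / target(current))
    if (runif(1) < target(proposal) / target(current)) {
      current <- proposal
    }
    chain[i] <- current
  }
  chain
}
chain <- run_MH_chain()
mean(chain)  # should be close to the analytic posterior mean 8/26 ~ 0.308
```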
**Exercise 9\.6**
Below is a shuffled list of the steps that occur in the MH algorithm. Bring the steps into the right order.
* If the new proposal has a higher posterior value than the most recent sample, then accept the new proposal.
* Generate a new value (proposal).
* Set an initial value.
* Compare the posterior value of the new proposal and the height of the posterior at the previous step.
* Choose to accept or reject the new proposal based on the computed proportion.
* If the new proposal has a lower posterior value than the most recent sample, compute the proportion of the posterior value of the new proposal and the height of the posterior at the previous step.
Solution
**Step 1:** Set an initial value.
**Step 2:** Generate a new value (proposal).
**Step 3:** Compare the posterior value of the new proposal and the height of the posterior at the previous step.
**Step 4:** If the new proposal has a higher posterior value than the most recent sample, then accept the new proposal.
**Step 5:** If the new proposal has a lower posterior value than the most recent sample, compute the proportion of the posterior value of the new proposal and the height of the posterior at the previous step.
**Step 6:** Choose to accept or reject the new proposal based on the computed proportion.
### 9\.3\.2 Excursion: Probabilistic modeling with Stan
There are a number of software solutions for Bayesian posterior approximation, all of which implement a form of MCMC sampling, and most of which also realize at least one other form of parameter estimation. Many of these use a special language to define the model and rely on a different programming language (like R, Python, Julia, etc.) to communicate with the program that does the sampling. Some options are:
* [WinBUGS](https://www.mrc-bsu.cam.ac.uk/software/bugs/the-bugs-project-winbugs/): a classic which has grown out of use a bit
* [JAGS](http://mcmc-jags.sourceforge.net): another classic
* [Stan](https://mc-stan.org): strongly developed current workhorse
* [WebPPL](http://webppl.org): light\-weight, browser\-based full probabilistic programming language
* [pyro](http://pyro.ai): for probabilistic (deep) machine learning, based on PyTorch
* [greta](https://greta-stats.org): R\-only probabilistic modeling package, based on Python and TensorFlow
This section will showcase an example using Stan.
Later parts of this book will focus on regression models, for which we will use an R package called `brms`.
This package uses Stan in the background.
We do not have to write or read Stan code to work with `brms`.
Still, a short peek at how Stan works is interesting if only to get a rough feeling for what is happening under the hood.
#### 9\.3\.2\.1 Basics of Stan
In order to approximate a posterior distribution over parameters for a model, given some data, using an MCMC algorithm, we need to specify the model for the sampler. In particular, we must tell it about (i) the parameters, (ii) their priors, and (iii) the likelihood function. The latter requires that the sampler knows about the data. To communicate with Stan we will use the R package `rstan` (there are similar packages also for Python, Julia and other languages). More information about Stan can be found in [the documentation section of the Stan homepage](https://mc-stan.org/users/documentation/).
The usual workflow with Stan and `rstan` consists of the following steps. First, we use R to massage the data into the right format for passing to Stan (a named list, see below). Second, we write the model in the Stan programming language. We do this in a stand\-alone file.[52](#fn52) Then, we run the Stan code with the R command `rstan::stan` supplied by the package `rstan`. Finally, we collect the output of this operation (basically: a set of samples from the posterior distribution) and do with it as we please (plotting, further analysis, diagnosing the quality of the samples, …).
This is best conveyed by a simple example.
#### 9\.3\.2\.2 Binomial Model
Figure [9\.8](Ch-03-03-estimation-algorithms.html#fig:ch-03-03-Binomial-Model-repeated) shows the Binomial model for coin flips, as discussed before. We are going to implement it in Stan.
Figure 9\.8: The Binomial Model (repeated from before).
We use the data from the [King of France example](app-93-data-sets-king-of-france.html#app-93-data-sets-king-of-france), where we are interested in the number \\(k \= 109\\) of “true” responses to sentences with a false presupposition over all \\(N \= 311\\) relevant observations.
We collect this information in a named list, which we will pass to Stan.
```
KoF_data_4_Stan <- list(
k = 109,
N = 311
)
```
Next, we need to write the actual model. Notice that Stan code is strictly regimented to be divided into different blocks, so that Stan knows what is data, what are parameters and what constitutes the actual model (prior and likelihood). Stan also wants to know the type of its variables (and the ranges of values these can take on).
```
data {
int<lower=0> N ;
int<lower=0,upper=N> k ;
}
parameters {
real<lower=0,upper=1> theta ;
}
model {
  // prior
theta ~ beta(1,1) ;
  // likelihood
k ~ binomial(N, theta) ;
}
```
We save this Stan code in a file `binomial_model.stan` (which you can download [here](https://raw.githubusercontent.com/michael-franke/intro-data-analysis/master/models_stan/binomial_model.stan)) in a folder `models_stan` and then use the function `rstan::stan` to run the Stan code from within R.
```
stan_fit_binomial <- rstan::stan(
# where is the Stan code
file = 'models_stan/binomial_model.stan',
# data to supply to the Stan program
data = KoF_data_4_Stan,
# how many iterations of MCMC
iter = 3000,
# how many warmup steps
warmup = 500
)
```
The object returned from this call to Stan is a special model fit object. If we just print it, we get interesting information about the estimated parameters:
```
print(stan_fit_binomial)
```
```
## Inference for Stan model: binomial_model.
## 4 chains, each with iter=3000; warmup=500; thin=1;
## post-warmup draws per chain=2500, total post-warmup draws=10000.
##
## mean se_mean sd 2.5% 25% 50% 75% 97.5% n_eff Rhat
## theta 0.35 0.00 0.03 0.30 0.33 0.35 0.37 0.41 4039 1
## lp__ -203.42 0.01 0.71 -205.39 -203.58 -203.15 -202.98 -202.93 4668 1
##
## Samples were drawn using NUTS(diag_e) at Wed Feb 8 13:05:33 2023.
## For each parameter, n_eff is a crude measure of effective sample size,
## and Rhat is the potential scale reduction factor on split chains (at
## convergence, Rhat=1).
```
To get the posterior samples in a tidy format we use a function from the `tidybayes` package:
```
tidy_samples <- tidybayes::tidy_draws(stan_fit_binomial) %>% select(theta)
tidy_samples
```
```
## # A tibble: 10,000 × 1
## theta
## <dbl>
## 1 0.382
## 2 0.374
## 3 0.370
## 4 0.334
## 5 0.338
## 6 0.357
## 7 0.331
## 8 0.330
## 9 0.325
## 10 0.380
## # … with 9,990 more rows
```
We can then `pull` out the column `theta` as a vector and feed it into the summary function from the `aida` package to get our key Bayesian estimates:
```
Bayes_estimates <- tidy_samples %>%
pull(theta) %>%
aida::summarize_sample_vector("theta")
Bayes_estimates
```
```
## # A tibble: 1 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 theta 0.300 0.352 0.405
```
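A plot in the spirit of Figure 9\.9 can be sketched directly from the tidy samples (this is only an illustrative sketch, not the book's own plotting code), overlaying the exact conjugate posterior \\(\\text{Beta}(110, 203)\\):
```
# sketch: density of posterior samples for theta with the exact posterior overlaid
tidy_samples %>%
  ggplot(aes(x = theta)) +
  geom_density(fill = "lightblue", alpha = 0.5) +
  # exact posterior by conjugacy: Beta(1 + 109, 1 + 311 - 109) = Beta(110, 203)
  stat_function(fun = function(x) dbeta(x, 110, 203), color = "black") +
  labs(x = "theta", y = "density")
```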
Figure [9\.9](Ch-03-03-estimation-algorithms.html#fig:ch-03-03-binomial-posterior) moreover shows a density plot derived from the MCMC samples, together with the estimated 95% HDI and the true posterior distribution (in black), as derived by conjugacy.
Figure 9\.9: Posterior over bias \\(\\theta\\) given \\(k\=109\\) and \\(N\=311\\) approximated by samples from Stan, with estimated 95% credible interval (red area). The black curve shows the true posterior, derived through conjugacy.
9\.4 Estimating the parameters of a Normal distribution
-------------------------------------------------------
To keep matters simple and the sample size low (so as to better see the effects of different priors; more on this below), we look at a (boring) fictitious data set, which we imagine to be height measurements of two species of flowers, unflowerly named species ‘A’ and ‘B’.
```
# fictitious data from height measurements (25 flowers of two species each in cm)
heights_A <- c(6.94, 11.77, 8.97, 12.2, 8.48,
9.29, 13.03, 13.58, 7.63, 11.47,
10.24, 8.99, 8.29, 10.01, 9.47,
9.92, 6.83, 11.6, 10.29, 10.7,
11, 8.68, 11.71, 10.09, 9.7)
heights_B <- c(11.45, 11.89, 13.35, 11.56, 13.78,
12.12, 10.41, 11.99, 12.27, 13.43,
10.91, 9.13, 9.25, 9.94, 13.5,
11.26, 10.38, 13.78, 9.35, 11.67,
11.32, 11.98, 12.92, 12.03, 12.02)
```
On the assumption that the metric measurements for flower ‘A’ come from a normal distribution, the goal is to estimate credible values for that normal distribution’s parameters \\(\\mu\_{A}\\) and \\(\\sigma\_{A}\\); and similarly for flower ‘B’.
The “research question” of interest is whether it is credible that the mean of heights for flower ‘A’ is smaller than that of ‘B’ \- or, in other words, whether the difference in means \\(\\delta \= \\mu\_{B} \- \\mu\_{A}\\) is credibly positive.
Here are relevant summary statistics for this case, and a plot, both of which seem to support the conjecture that flower ‘A’ is smaller, on average, than flower ‘B’.
```
# bring data into a more practical format
ffm_data <- tibble(
A = heights_A,
B = heights_B
) %>%
pivot_longer(
cols = everything(),
names_to = 'species',
values_to = 'height'
)
# some summary statistics
ffm_data %>%
group_by(species) %>%
summarise(
mean = mean(height),
std_dev = sd(height)
)
```
```
## # A tibble: 2 × 3
## species mean std_dev
## <chr> <dbl> <dbl>
## 1 A 10.0 1.76
## 2 B 11.7 1.38
```
```
ffm_data %>%
ggplot(aes(x = height)) +
geom_density(aes(color = species), size = 2) +
geom_rug() +
facet_grid(~species, scales = "free") +
theme(legend.position = 'none')
```
The remainder of this chapter will introduce two models for inferring the parameters of a (single) normal distribution, both of which are set up in such a way that it is possible to compute a closed\-form solution for the posterior distributions over \\(\\mu\\) and \\(\\sigma\\):
(i) a model with uninformative priors, and
(ii) a model with conjugate priors.
### 9\.4\.1 Uninformative priors
The model with uninformative priors is shown in Figure [9\.10](ch-03-04-parameter-estimation-normal.html#fig:ch-03-03-estimation-normal-uninformative-model).
Figure 9\.10: A model to infer the parameter of a normal distribution with non\-informative priors.
The posterior for variance \\(\\sigma^{2}\\) and mean \\(\\mu\\) for this model with uninformative priors is as follows:
\\\[
\\begin{align\*}
P(\\mu, \\sigma^2 \\mid \\mathbf{y})
\& \= {\\color{7F2615}{P(\\sigma^2 \| \\mathbf{y})}} \\ \\ \\ {\\color{353585}{P(\\mu \\mid \\sigma^2, \\mathbf{y})}} \& \\text{with:} \\\\
\\sigma^2 \\mid \\mathbf{y} \& \\sim \\mathrm{Inv}\\text{\-}\\chi^2 \\left(n\-1,\\ s^2 \\right) \\\\
\\mu \\mid \\sigma^2, \\mathbf{y} \& \\sim \\mathrm{Normal} \\left (\\bar{y},\\ \\frac{\\sigma}{\\sqrt{n}} \\right)
\\end{align\*}
\\]
The `aida` package provides the convenience function `aida::get_samples_single_noninformative`, which we use below but also show explicitly first. It takes a vector `data_vector` (like `heights_A`) of metric observations as input and returns `n_samples` samples from the posterior.
```
get_samples_single_noninformative <- function(data_vector, n_samples = 1000) {
# determine sample variance
s_squared <- var(data_vector)
# posterior samples of the variance
var_samples <- extraDistr::rinvchisq(
n = n_samples,
nu = length(data_vector) - 1,
tau = s_squared
)
# posterior samples of the mean given the sampled variance
mu_samples <- map_dbl(
var_samples,
function(var) rnorm(
n = 1,
mean = mean(data_vector),
sd = sqrt(var / length(data_vector))
)
)
# return pairs of values
tibble(
mu = mu_samples,
sigma = sqrt(var_samples)
)
}
```
If we apply this function to the data of flower ‘A’, we get samples of likely pairs consisting of means *and* standard deviations (each row is one pair of associated samples):
```
aida::get_samples_single_noninformative(heights_A, n_samples = 5)
```
```
## # A tibble: 5 × 2
## mu sigma
## <dbl> <dbl>
## 1 10.0 1.71
## 2 10.3 1.91
## 3 10.3 2.08
## 4 10.6 1.28
## 5 9.44 1.92
```
By taking more samples from this 2\-dimensional (joint) posterior distribution, a scatter plot reveals its approximate shape.
```
# take 10,000 samples from the posterior
post_samples_A_noninfo <- aida::get_samples_single_noninformative(data_vector = heights_A, n_samples = 10000)
# look at a scatter plot
post_samples_A_noninfo %>%
ggplot(aes(x = sigma, y = mu)) +
geom_point(alpha = 0.4, color = "lightblue")
```
The plot below shows the marginal distributions of each variable, \\(\\mu\\) and \\(\\sigma\\), separately:
```
post_samples_A_noninfo %>%
pivot_longer(cols = everything(), names_to = "parameter", values_to = "value") %>%
ggplot(aes(x = value)) +
geom_density() +
facet_grid(~ parameter, scales = "free")
```
As usual, we can also produce the relevant Bayesian summary statistics for our samples, like so:
```
rbind(
aida::summarize_sample_vector(post_samples_A_noninfo$mu, "mu"),
aida::summarize_sample_vector(post_samples_A_noninfo$sigma, "sigma")
)
```
```
## # A tibble: 2 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 mu 9.30 10.0 10.8
## 2 sigma 1.34 1.82 2.37
```
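As a quick plausibility check (added here, not part of the text), the posterior means under this uninformative model should be in the same ballpark as the corresponding sample statistics of the raw data:
```
# sample statistics of the raw data for comparison
mean(heights_A)  # ~ 10.03, close to the posterior mean of mu
sd(heights_A)    # ~ 1.76, close to the posterior mean of sigma
```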
### 9\.4\.2 Conjugate priors
The model with uninformative priors is useful when modelers have no *a priori* assumptions about \\(\\mu\\) and \\(\\sigma\\), or do not wish to include any.
When prior assumptions are relevant, we can use a slightly more complex model with conjugate priors.
The model is shown in Figure [9\.11](ch-03-04-parameter-estimation-normal.html#fig:ch-03-03-estimation-normal-conjugate).
Figure 9\.11: Model with conjugate priors.
With this prior structure, the posterior is of the form:
\\\[
\\begin{align\*}
P(\\mu, \\sigma^2 \\mid \\mathbf{y})
\& \= {\\color{7F2615}{P(\\sigma^2 \| \\mathbf{y})}} \\ \\ \\ {\\color{353585}{P(\\mu \\mid \\sigma^2, \\mathbf{y})}} \& \\text{with:} \\\\
\\sigma^2 \\mid \\mathbf{y} \& \\sim {\\color{7F2615}{\\mathrm{Inv}\\text{\-}\\chi^2 \\left({\\color{3F9786}{\\nu\_1}},\\ {\\color{3F9786}{\\sigma^2\_1}} \\right)}} \\\\
\\mu \\mid \\sigma^2, \\mathbf{y} \& \\sim {\\color{353585}{\\mathrm{Normal} \\left ({\\color{3F9786}{\\mu\_1}}, \\frac{\\sigma}{\\sqrt{{\\color{3F9786}{\\kappa\_1}}}} \\right)}} \& \\text{where:} \\\\
{\\color{3F9786}{\\nu\_1}} \& \= \\nu\_0 \+ n \\\\
{\\color{3F9786}{\\nu\_1}} \\ {\\color{3F9786}{\\sigma\_1^2}} \& \= \\nu\_0 \\sigma\_0^2 \+ (n\-1\) s^2 \+ \\frac{\\kappa\_0 \\ n}{\\kappa\_0 \+ n} (\\bar{y} \- \\mu\_0\)^2 \\\\
{\\color{3F9786}{\\mu\_1}} \& \= \\frac{\\kappa\_0}{\\kappa\_0 \+ n} \\mu\_0 \+ \\frac{n}{\\kappa\_0 \+ n} \\bar{y} \\\\
{\\color{3F9786}{\\kappa\_1}} \& \= \\kappa\_0 \+ n
\\end{align\*}
\\]
**Exercise 9\.7**
The `aida` package provides the convenience function `aida::sample_Norm_inv_chisq` for sampling from the ‘normal inverse\-\\(\\chi^2\\)’ prior. Here is the source code of this function:
```
sample_Norm_inv_chisq <- function(
n_samples = 10000,
nu = 1,
var = 1,
mu = 0,
kappa = 1
)
{
var_samples <- extraDistr::rinvchisq(
n = n_samples,
nu = nu,
tau = var
)
mu_samples <- map_dbl(
var_samples,
function(s) rnorm(
n = 1,
mean = mu,
sd = sqrt(s / kappa)
)
)
tibble(
sigma = sqrt(var_samples),
mu = mu_samples
)
}
```
In the code below, we use this function to plot 10,000 samples from the prior with a particular set of parameter values. Notice the line `filter(abs(value) <= 10)` which is useful for an informative plot (try commenting it out: what does that tell you about the range of values reasonably likely to get sampled?).
```
# samples from the prior
samples_prior_1 <- aida::sample_Norm_inv_chisq(
nu = 1,
var = 1, # a priori "variance of the variance"
mu = 0,
kappa = 1
)
samples_prior_1 %>%
pivot_longer(cols = everything(), names_to = "parameter", values_to = "value") %>%
filter(abs(value) <= 10) %>%
ggplot(aes(x = value)) +
geom_density() +
facet_grid(~parameter, scales = "free")
```
To get comfortable with this ‘normal inverse\-\\(\\chi^2\\)’ distribution, fill in the `XXX` in the following code box (possibly removing or altering parts of the plotting code if you need to) to find parameter values that encode a prior belief according to which credible values of \\(\\sigma\\) are not much bigger than (very roughly) 7\.5, and credible values of \\(\\mu\\) lie (very roughly) in the range of 15 to 25\. (Hint: intuit what the meaning of each parameter value is by a trial\-error\-think method.) The plot you generate could look roughly like the one below.
(Motivation for the exercise: you should get familiar with this distribution, and also realize that it is clunky and that you might want to use a different prior structure in order to encode specific beliefs … which is exactly why we might want to be more flexible and go beyond conjugate priors in some cases.)
```
# samples from the prior
samples_prior_2 <- aida::sample_Norm_inv_chisq(
nu = XXX,
var = XXX,
mu = XXX,
kappa = XXX
)
samples_prior_2 %>%
pivot_longer(cols = everything(), names_to = "parameter", values_to = "value") %>%
filter(!(parameter == "mu" & (value >= 40 | value <= 0))) %>%
filter(!(parameter == "sigma" & value >= 10)) %>%
ggplot(aes(x = value)) +
geom_density() +
facet_grid(~parameter, scales = "free")
```
Solution
```
# samples from the prior
samples_prior_2 <- aida::sample_Norm_inv_chisq(
nu = 1,
var = 1, # a priori "variance of the variance"
mu = 20,
kappa = 1
)
samples_prior_2 %>%
pivot_longer(cols = everything(), names_to = "parameter", values_to = "value") %>%
filter(!(parameter == "mu" & (value >= 40 | value <= 0))) %>%
filter(!(parameter == "sigma" & value >= 10)) %>%
ggplot(aes(x = value)) +
geom_density() +
facet_grid(~parameter, scales = "free")
```
Here is another convenience function from the `aida` package for obtaining posterior samples for the conjugate prior model, taking as input a specification of the prior beliefs. Again, we first show the function explicitly before applying it to the flower data set.
```
get_samples_single_normal_conjugate <- function(
data_vector,
nu = 1,
var = 1,
mu = 0,
kappa = 1,
n_samples = 1000
)
{
n <- length(data_vector)
aida::sample_Norm_inv_chisq(
n_samples = n_samples,
nu = nu + n,
var = (nu * var + (n - 1) * var(data_vector) + (kappa * n) / (kappa + n)) / (nu + n),
mu = kappa / (kappa + n) * mu + n / (kappa + n) * mean(data_vector),
kappa = kappa + n
)
}
```
The code below calls this function to obtain samples from the posterior for two different models.
This will help illustrate the effect of priors on the posterior once more, especially for a case like the one at hand, where we have rather few observations.
```
# posterior samples for prior 1
post_samples_A_conjugate_1 <- aida::get_samples_single_normal_conjugate(
heights_A,
nu = 1,
var = 1,
mu = 0,
kappa = 1,
n_samples = 10000
)
# posterior samples for prior 2
post_samples_A_conjugate_2 <- aida::get_samples_single_normal_conjugate(
heights_A,
nu = 1,
var = 1/1000,
mu = 40,
kappa = 10,
n_samples = 10000
)
rbind(
aida::summarize_sample_vector(post_samples_A_conjugate_1$mu, "mu") %>% mutate(model = 1),
aida::summarize_sample_vector(post_samples_A_conjugate_1$sigma, "sigma") %>% mutate(model = 1),
aida::summarize_sample_vector(post_samples_A_conjugate_2$mu, "mu") %>% mutate(model = 2),
aida::summarize_sample_vector(post_samples_A_conjugate_2$sigma, "sigma") %>% mutate(model = 2)
)
```
```
## # A tibble: 4 × 5
## Parameter `|95%` mean `95%|` model
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 mu 8.97 9.65 10.4 1
## 2 sigma 1.32 1.76 2.27 1
## 3 mu 18.0 18.6 19.2 2
## 4 sigma 1.36 1.82 2.36 2
```
The posterior is a compromise between prior and likelihood.
The prior for model 1 is rather weak (high variance and low \\(\\kappa\\), leading to a large range of plausible values for \\(\\mu\\)), so its posterior stays close to what the data suggest.
The prior for model 2, in contrast, is strongly biased towards high values of \\(\\mu\\) (it has `mu = 40` with a very small prior variance and a comparatively high \\(\\kappa\\)).
This is why the credible values of \\(\\mu\\) under model 2 are much higher than the sample mean of the data.
### 9\.4\.3 Estimating the difference between group means
The overarching “research question” to address is: should we believe that flowers of type B are taller, on average, than flowers of type A?
To address this question, it suffices to take samples for \\(\\mu\_{A}\\) and \\(\\mu\_{B}\\), obtained by one of the methods introduced in the previous sections (using the same model for both flower types, unless we have a good reason not to), and then to inspect the vector of differences between samples \\(\\delta \= \\mu\_{B} \- \\mu\_{A}\\).
If the derived samples of \\(\\delta\\) are credibly bigger than zero, there is reason to believe that there is a difference between flower types such that ‘B’ is bigger than ‘A’.
So, let’s use the (conjugate) prior of model 1 from above to also take 10,000 samples from the posterior when conditioning on the data in `heights_B`. We store the results in `post_samples_B_conjugate_1`.
```
post_samples_B_conjugate_1 <- aida::get_samples_single_normal_conjugate(
heights_B,
nu = 1,
var = 1,
mu = 0,
kappa = 1,
n_samples = 10000
)
```
The summary of the difference vector gives us information about credible values of \\(\\delta \= \\mu\_{B} \- \\mu\_{A}\\).
```
delta_flower_heights <- post_samples_B_conjugate_1$mu - post_samples_A_conjugate_1$mu
aida::summarize_sample_vector(delta_flower_heights, name = 'delta')
```
```
## # A tibble: 1 × 4
## Parameter `|95%` mean `95%|`
## <chr> <dbl> <dbl> <dbl>
## 1 delta 0.694 1.57 2.43
```
We might conclude from this that a positive difference in height is credible.
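A complementary summary (added here for illustration, not part of the text) is the estimated posterior probability that the difference is positive:
```
# proportion of posterior samples with a positive difference (mu_B > mu_A)
mean(delta_flower_heights > 0)
```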
More on Bayesian testing of such hypotheses about parameter values in Chapter [11](ch-03-07-hypothesis-testing-Bayes.html#ch-03-07-hypothesis-testing-Bayes).
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-03-06-model-comparison-case-study.html |
10\.1 Case study: recall models
-------------------------------
As a running example for this chapter, we borrow from Myung ([2003](#ref-Myung2003:Tutorial-on-Max)) and consider a fictitious data set of recall rates and two models to explain this data.
As for the data, for each time point (in seconds) \\(t \\in \\{1, 3, 6, 9, 12, 18\\}\\), we have 100 (binary) observations of whether a previously memorized item was recalled correctly.
```
# time after memorization (in seconds)
t <- c(1, 3, 6, 9, 12, 18)
# proportion (out of 100) of correct recall
y <- c(.94, .77, .40, .26, .24, .16)
# number of observed correct recalls (out of 100)
obs <- y * 100
```
A visual representation of this data set is here:
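(The plot itself is not reproduced in this text; the following minimal ggplot sketch approximately recreates it from the data defined above.)
```
# sketch: observed proportion of correct recall as a function of time
tibble(t = t, proportion = y) %>%
  ggplot(aes(x = t, y = proportion)) +
  geom_point(size = 2) +
  geom_line()
```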
We are interested in comparing two theoretically different models for this data. Models differ in their assumption about the functional relationship between recall probability and time. The **exponential model** assumes that the recall probability \\(\\theta\_t\\) at time \\(t\\) is an exponential decay function with parameters \\(a\\) and \\(b\\):
\\\[\\theta\_t(a, b) \= a \\exp (\-bt), \\ \\ \\ \\ \\text{where } a,b\>0 \\]
Taking the binary nature of the data (recalled / not recalled) into account, this results in the following likelihood function for the exponential model:
\\\[
\\begin{aligned}
P(k \\mid a, b, N , M\_{\\text{exp}}) \& \= \\text{Binom}(k,N, a \\exp (\-bt)), \\ \\ \\ \\ \\text{where } a,b\>0
\\end{aligned}
\\]
In contrast, the **power model** assumes that the relationship is that of a power function:
\\\[\\theta\_t(c, d) \= ct^{\-d}, \\ \\ \\ \\ \\text{where } c,d\>0 \\]
The resulting likelihood function for the power model is:
\\\[
\\begin{aligned}
P(k \\mid c, d, N , M\_{\\text{pow}}) \& \= \\text{Binom}(k,N, c\\ t^{\-d}), \\ \\ \\ \\ \\text{where } c,d\>0
\\end{aligned}
\\]
These models therefore make different (parameterized) predictions about the time course of forgetting/recall. Figure [10\.1](Chap-03-06-model-comparison-case-study.html#fig:Chap-03-06-model-comparison-model-predictions) shows the predictions of each model for \\(\\theta\_t\\) for different parameter values:
Figure 10\.1: Examples of predictions of the exponential and the power model of forgetting for different values of each model’s parameters.
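To make the two functional forms concrete, here is a minimal sketch that computes each model’s predicted recall probabilities at the observed time points for one (arbitrarily chosen, purely illustrative) parameter setting per model:
```
# hypothetical parameter values, purely for illustration
a_ex <- 1; b_ex <- 0.2   # exponential model
c_pw <- 1; d_pw <- 0.5   # power model
tibble(
  t         = t,
  theta_exp = a_ex * exp(-b_ex * t), # exponential decay
  theta_pow = c_pw * t^(-d_pw)       # power function
)
```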
The research question of relevance is: which of these two models is a better model for the observed data?
We are going to look at the Akaike information criterion (AIC) first, which only considers the models’ likelihood functions and is therefore a non\-Bayesian method.
We will see that AIC scores are easy to compute, but that they yield numbers which are hard to interpret in themselves, or which are only approximations of quantities that do have a clear interpretation.
Then we look at a Bayesian method, using Bayes factors, which does take priors over model parameters into account.
We will see that Bayes factors are much harder to compute, but do directly calculate quantities that are intuitively interpretable.
We will also see that AIC scores only very indirectly take a model’s complexity into account.
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-03-06-model-comparison-AIC.html |
10\.2 Akaike Information Criterion
----------------------------------
A wide\-spread non\-Bayesian approach to model comparison is to use the **Akaike information criterion (AIC)**. The AIC is the most common instance of a class of measures for model comparison known as *information criteria*, which all draw on information\-theoretic notions to compare how good each model is.
If \\(M\_i\\) is a model, specified here only by its likelihood function \\(P(D \\mid \\theta\_i, M\_i)\\), with \\(k\_i\\) model parameters in parameter vector \\(\\theta\_i\\), and if \\(D\_\\text{obs}\\) is the observed data, then the AIC score of model \\(M\_i\\) given \\(D\_\\text{obs}\\) is defined as:
\\\[
\\begin{aligned}
\\text{AIC}(M\_i, D\_\\text{obs}) \& \= 2k\_i \- 2\\log P(D\_\\text{obs} \\mid \\hat{\\theta\_i}, M\_i)
\\end{aligned}
\\]
Here, \\(\\hat{\\theta}\_i \= \\arg \\max\_{\\theta\_i} P(D\_\\text{obs} \\mid \\theta\_i, M\_i)\\) is the best\-fitting parameter vector, i.e., the maximum likelihood estimate (MLE), and \\(k\_i\\) is the number of free parameters in model \\(M\_i\\).
The lower an AIC score, the better the model (in comparison to other models for the same data \\(D\_\\text{obs}\\)). All else equal, the higher the number of free parameters \\(k\_i\\), the worse the model’s AIC score. The first summand in the definition above can, therefore, be conceived of as a measure of **model complexity**. As for the second summand, think of \\(\- \\log P(D\_\\text{obs} \\mid \\hat{\\theta}\_i, M\_i)\\) as a measure of (information\-theoretic) surprisal: how surprising is the observed data \\(D\_\\text{obs}\\) from the point of view of model \\(M\\) under the most favorable circumstances (that is, the MLE of \\(\\theta\_i\\)). The higher the probability \\(P(D\_\\text{obs} \\mid \\hat{\\theta}\_i, M\_i)\\), the better the model \\(M\_i\\)’s AIC score, all else equal.
To apply AIC\-based model comparison to the recall models, we first need to compute the MLE of each model (see Chapter [9\.1\.3](ch-03-03-estimation-bayes.html#ch-03-04-parameter-estimation-conjugacy)). Here are functions that return the negative log\-likelihood of each model, for any (suitable) pair of parameter values:
```
# generic neg-log-LH function (covers both models)
nLL_generic <- function(par, model_name) {
w1 <- par[1]
w2 <- par[2]
  # make sure parameters are in an acceptable range
if (w1 < 0 | w2 < 0 | w1 > 20 | w2 > 20) {
return(NA)
}
# calculate predicted recall rates for given parameters
if (model_name == "exponential") {
theta <- w1 * exp(-w2 * t) # exponential model
} else {
theta <- w1 * t^(-w2) # power model
}
# avoid edge cases of infinite log-likelihood
theta[theta <= 0.0] <- 1.0e-4
theta[theta >= 1.0] <- 1 - 1.0e-4
# return negative log-likelihood of data
- sum(dbinom(x = obs, prob = theta, size = 100, log = T))
}
# negative log likelihood of exponential model
nLL_exp <- function(par) {nLL_generic(par, "exponential")}
# negative log likelihood of power model
nLL_pow <- function(par) {nLL_generic(par, "power")}
```
These functions are then optimized with R’s built\-in function `optim`. The results are shown in the table below.
```
# getting the best fitting values
bestExpo <- optim(nLL_exp, par = c(1, 0.5))
bestPow <- optim(nLL_pow, par = c(0.5, 0.2))
MLEstimates <- data.frame(model = rep(c("exponential", "power"), each = 2),
parameter = c("a", "b", "c", "d"),
value = c(bestExpo$par, bestPow$par))
MLEstimates
```
```
## model parameter value
## 1 exponential a 1.0701722
## 2 exponential b 0.1308151
## 3 power c 0.9531330
## 4 power d 0.4979154
```
The MLE\-predictions of each model are shown in Figure [10\.2](Chap-03-06-model-comparison-AIC.html#fig:Chap-03-06-model-comparison-MLE-fits) below, alongside the observed data.
Figure 10\.2: Predictions of the exponential and the power model under best\-fitting parameter values.
By visual inspection of Figure [10\.2](Chap-03-06-model-comparison-AIC.html#fig:Chap-03-06-model-comparison-MLE-fits) alone, it is impossible to say with confidence which model is better. Numbers might help see more fine\-grained differences.
So, let’s look at the log\-likelihood and the corresponding probability of the data for each model under each model’s best fitting parameter values.
```
# predicted recall rates under each model's best-fitting (MLE) parameters
predExp <- bestExpo$par[1] * exp(-bestExpo$par[2] * t)
predPow <- bestPow$par[1] * t^(-bestPow$par[2])
modelStats <- tibble(
model = c("expo", "power"),
`log likelihood` = round(c(-bestExpo$value, -bestPow$value), 3),
probability = signif(exp(c(-bestExpo$value, -bestPow$value)), 3),
# sum of squared errors
SS = round(c(sum((predExp - y)^2), sum((predPow - y)^2)), 3)
)
modelStats
```
```
## # A tibble: 2 × 4
## model `log likelihood` probability SS
## <chr> <dbl> <dbl> <dbl>
## 1 expo -18.7 7.82e- 9 0.019
## 2 power -26.7 2.47e-12 0.057
```
The exponential model has a higher log\-likelihood, a higher probability, and a lower sum of squares. This suggests that the exponential model is better.
The AIC\-score of these models is a direct function of the negative log\-likelihood. Since both models have the same number of parameters, we arrive at the same verdict as before: based on a comparison of AIC\-scores, the exponential model is the better model.
```
get_AIC <- function(optim_fit) {
2 * length(optim_fit$par) + 2 * optim_fit$value
}
AIC_scores <- tibble(
AIC_exponential = get_AIC(bestExpo),
AIC_power = get_AIC(bestPow)
)
AIC_scores
```
```
## # A tibble: 1 × 2
## AIC_exponential AIC_power
## <dbl> <dbl>
## 1 41.3 57.5
```
How should we interpret the difference in AIC\-scores? Some suggest that differences in AIC\-scores larger than 10 should be treated as implying that the weaker model has practically no empirical support ([Burnham and Anderson 2002](#ref-BurnhamAnderson2002:Model-Selection)). Adopting such a criterion, we would therefore favor the exponential model based on the data observed.
But we could also try to walk a more nuanced, more quantitative road.
We would ideally want to know the *absolute probability* of \\(M\_i\\) given the data: \\(P(M\_i \\mid D)\\).
Unfortunately, to calculate this (by Bayes rule), we would need to normalize by quantifying over *all* models. Alternatively, we look at the relative probability of a small selection of models.
Indeed, we can look at relative AIC\-scores in terms of so\-called **Akaike weights** ([Wagenmakers and Farrell 2004](#ref-WagenmakersFarrell2004:AIC-model-selec); [Burnham and Anderson 2002](#ref-BurnhamAnderson2002:Model-Selection)) to derive an approximation of \\(P(M\_i \\mid D)\\), at least for the case where we only consider a small set of candidate models.
So, if we want to compare models \\(M\_1, \\dots, M\_n\\) and \\(\\text{AIC}(M\_i, D)\\) is the AIC\-score of model \\(M\_i\\) for observed data \\(D\\), then the **Akaike weight of model \\(M\_i\\)** is defined as:
\\\[
\\begin{aligned}
w\_{\\text{AIC}}(M\_i, D) \& \= \\frac{\\exp (\- 0\.5 \* \\Delta\_{\\text{AIC}}(M\_i,D) )} {\\sum\_{j\=1}^k\\exp (\- 0\.5 \* \\Delta\_{\\text{AIC}}(M\_j,D) )}\\, \\ \\ \\ \\ \\text{where} \\\\
\\Delta\_{\\text{AIC}}(M\_i,D) \& \= \\text{AIC}(M\_i, D) \- \\min\_j \\text{AIC}(M\_j, D)
\\end{aligned}
\\]
Akaike weights are relative and normalized measures, and may serve as an approximate measure of a model’s posterior probability given the data:
\\\[ P(M\_i \\mid D) \\approx w\_{\\text{AIC}}(M\_i, D) \\]
For the running example at hand, this would mean that we could conclude that the posterior probability of the exponential model is approximately:
```
delta_AIC_power <- AIC_scores$AIC_power - AIC_scores$AIC_exponential
delta_AIC_exponential <- 0
Akaike_weight_exponential <- exp(-0.5 * delta_AIC_exponential) /
(exp(-0.5 * delta_AIC_exponential) + exp(-0.5 * delta_AIC_power))
Akaike_weight_exponential
```
```
## [1] 0.9996841
```
We can interpret this numerical result as indicating that, given a universe in which only the exponential and the power model exist, the posterior probability of the exponential model is almost 1 (assuming, implicitly, that both models are equally likely *a priori*).
We would conclude from this approximate quantitative assessment that the empirical evidence supplied by the given data in favor of the exponential model is very strong.
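For completeness, here is a small sketch that computes the Akaike weights of both models in one go, directly following the definition above (the two weights sum to one, so the weight of the power model is just the complement of the value shown above):
```
# Akaike weights for both models at once
AIC_values <- c(AIC_scores$AIC_exponential, AIC_scores$AIC_power)
delta_AIC  <- AIC_values - min(AIC_values)
exp(-0.5 * delta_AIC) / sum(exp(-0.5 * delta_AIC))
```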
Our approximation is better the more data we have. We will see a method below, the Bayesian method using Bayes factors, which computes \\(P(M\_i \\mid D)\\) in a non\-approximate way.
**Exercise 11\.1**
1. Describe what the following variables represent in the AIC formula:
\\\[
\\begin{aligned}
\\text{AIC}(M\_i, D\_\\text{obs}) \& \= 2k\_i \- 2\\log P(D\_\\text{obs} \\mid \\hat{\\theta\_i}, M\_i)
\\end{aligned}
\\]
1. \\(k\_i\\) stands for:
2. \\(\\hat{\\theta\_i}\\) stands for:
3. \\(P(D\_\\text{obs} \\mid \\hat{\\theta\_i}, M\_i)\\) stands for:
Solution
1. the number of free parameters in model \\(M\_{i}\\);
2. the parameter vector obtained by maximum likelihood estimation for model \\(M\_{i}\\) and data \\(D\_{\\text{obs}}\\);
3. the likelihood of the data \\(D\_{\\text{obs}}\\) under the best fitting parameters of a model \\(M\_{i}\\).
2. Do you see that there is something “circular” in the definition of AICs? (Hint: What do we use the data \\(D\_{\\text{obs}}\\) for?)
Solution
We use the same data twice! We use \\(D\_{\\text{obs}}\\) to find the best fitting parameter values, and then we ask how likely \\(D\_{\\text{obs}}\\) is given the best fitting parameter values. If model comparison is about how well a model explains the data, then this is a rather circular measure: we quantify how well a model explains or predicts a data set after having “trained / optimized” the model for exactly this data set.
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/Chap-03-06-model-comparison-BF.html |
10\.3 Bayes factors
-------------------
At the end of the previous section, we saw that we can use the AIC\-approach to calculate an approximate value of the posterior probability \\(P(M\_{i} \\mid D)\\) for model \\(M\_{i}\\) given data \\(D\\). The Bayes factor approach is similar to this, but avoids taking priors over models into the equation by focusing on *the extent to which data \\(D\\) changes our beliefs about which model is more likely*.
Take two Bayesian models:
* \\(M\_1\\) has prior \\(P(\\theta\_1 \\mid M\_1\)\\) and likelihood \\(P(D \\mid \\theta\_1, M\_1\)\\)
* \\(M\_2\\) has prior \\(P(\\theta\_2 \\mid M\_2\)\\) and likelihood \\(P(D \\mid \\theta\_2, M\_2\)\\)
Using Bayes rule, we compute the posterior odds of models (given the data) as the product of the likelihood ratio and the prior odds.
\\\[\\underbrace{\\frac{P(M\_1 \\mid D)}{P(M\_2 \\mid D)}}\_{\\text{posterior odds}} \= \\underbrace{\\frac{P(D \\mid M\_1\)}{P(D \\mid M\_2\)}}\_{\\text{Bayes factor}} \\ \\underbrace{\\frac{P(M\_1\)}{P(M\_2\)}}\_{\\text{prior odds}}\\]
The likelihood ratio is also called the **Bayes factor**. Formally, the Bayes factor is the factor by which a rational agent changes her prior odds in the light of observed data to arrive at the posterior odds. More intuitively, the Bayes factor quantifies the strength of evidence given by the data about the models of interest. It expresses this evidence in terms of the models’ relative prior predictive accuracy. To see the latter, let’s expand the Bayes factor as what it actually is: the ratio of marginal likelihoods.
\\\[
\\frac{P(D \\mid M\_1\)}{P(D \\mid M\_2\)} \= \\frac{\\int P(\\theta\_1 \\mid M\_1\) \\ P(D \\mid \\theta\_1, M\_1\) \\text{ d}\\theta\_1}{\\int P(\\theta\_2 \\mid M\_2\) \\ P(D \\mid \\theta\_2, M\_2\) \\text{ d}\\theta\_2}
\\]
Three insights are to be gained from this expansion. Firstly, the Bayes factor is a measure of how well each model would have predicted the data *ex ante*, i.e., before having seen any data. In this way, it is diametrically opposed to a concept like AIC, which relies on models’ maximum likelihood fits (therefore *using the data*, so being *ex post*).
Secondly, the marginal likelihood of a model is exactly the quantity that we identified (in the context of parameter estimation) as being very hard to compute, especially for complex models. The fact that marginal likelihoods are hard to compute was the reason that methods like MCMC sampling are useful, since they give posterior samples *without* requiring the calculation of marginal likelihoods.
It follows that Bayes factors can be very difficult to compute in general.
However, for many prominent models, it is possible to calculate Bayes factors analytically if the right kinds of priors are specified ([Rouder et al. 2009](#ref-RouderSpeckman2009:Bayesian-t-test); [Rouder and Morey 2012](#ref-RouderMorey2012:Default-Bayes-F); [Gronau, Ly, and Wagenmakers 2019](#ref-GronauLy2019:Informed-Bayesi)).
We will see an example of this in Chapter [11](ch-03-07-hypothesis-testing-Bayes.html#ch-03-07-hypothesis-testing-Bayes).
Also, as we will see in the following there are very clever approaches to computing Bayes factors in special cases and good algorithms for approximating marginal likelihoods also for complex models.
Thirdly, Bayes factor model comparison implicitly (and quite vigorously) punishes model complexity, but in a more sophisticated manner than just counting free parameters. To appreciate this intuitively, imagine a model with a large parameter set and a very diffuse, uninformative prior that spreads its probability over a wide range of parameter values. Since Bayes factors are computed based on *ex ante* predictions, a diffuse model is punished for its imprecision of prior predictions because we integrate over all parameters (weighted by priors) and their associated likelihood.
As for notation, we write:
\\\[\\text{BF}\_{12} \= \\frac{P(D \\mid M\_1\)}{P(D \\mid M\_2\)}\\]
for the Bayes factor in favor of model \\(M\_1\\) over model \\(M\_2\\). This quantity can take on positive values, which are often translated into natural language as follows:
| \\(BF\_{12}\\) | interpretation |
| --- | --- |
| 1 | irrelevant data |
| 1 \- 3 | hardly worth ink or breath |
| 3 \- 6 | anecdotal |
| 6 \- 10 | now we’re talking: substantial |
| 10 \- 30 | strong |
| 30 \- 100 | very strong |
| 100 \+ | decisive (bye, bye \\(M\_2\\)!) |
As \\(\\text{BF}\_{12} \= \\text{BF}\_{21}^{\-1}\\), it suffices to give this translation into natural language only for values \\(\\ge 1\\).
Bayes Factors have a nice property: we can retrieve the Bayes Factor for models \\(M\_{0}\\) and \\(M\_{2}\\) when we know, for each of them, the Bayes Factor relative to a third model \\(M\_{1}\\).
**Proposition 10\.1 ('Transitivity' of Bayes Factors)** For any three models \\(M\_{0}\\), \\(M\_{1}\\), and \\(M\_{2}\\): \\(\\mathrm{BF}\_{02} \= \\mathrm{BF}\_{01} \\ \\mathrm{BF}\_{12}\\).
*Proof*. For any two models \\(M\_{i}\\) and \\(M\_{j}\\), the Bayes Factor \\(\\mathrm{BF}\_{ij}\\) is given as the factor by which the prior odds and the posterior odds differ:
\\\[\\begin{align\*}
% \\label{eq:observation\-BF\-1}
\\mathrm{BF}\_{ij} \= \\frac{P(M\_{i} \\mid D)}{P(M\_{j} \\mid D)} \\frac{P(M\_{j})}{P(M\_{i})}\\,,
\\end{align\*}\\]
which can be rewritten as:
\\\[\\begin{align\*}
% \\label{eq:observation\-BFs\-2}
\\frac{P(M\_{i} \\mid D)}{P(M\_{i})} \= \\mathrm{BF}\_{ij} \\frac{P(M\_{j} \\mid D)}{P(M\_{j})}\\,.
\\end{align\*}\\]
Using these observations, we find that:
\\\[\\begin{align\*}
\\mathrm{BF}\_{02}
\& \= \\frac{P(M\_{0} \\mid D)}{P(M\_{2} \\mid D)} \\frac{P(M\_{2})}{P(M\_{0})}
\= \\frac{P(M\_{0} \\mid D)}{P(M\_{0})} \\frac{P(M\_{2})}{P(M\_{2} \\mid D)}
\\\\
\& \= \\mathrm{BF}\_{01} \\frac{P(M\_{1} \\mid D)}{P(M\_{1})} \\ \\frac{1}{\\mathrm{BF}\_{21}} \\frac{P(M\_{1}) }{P(M\_{1} \\mid D)}
\= \\mathrm{BF}\_{01} \\ \\mathrm{BF}\_{12}
\\end{align\*}\\]
There are at least two general approaches to calculating or approximating Bayes factors, paired here with a (non\-exhaustive) list of example methods:
1. get each model’s marginal likelihood
* grid approximation (see Section [10\.3\.1](Chap-03-06-model-comparison-BF.html#Chap-03-06-model-comparison-BF-grid))
* by Monte Carlo sampling (see Section [10\.3\.2](Chap-03-06-model-comparison-BF.html#Chap-03-06-model-comparison-BF-naiveMC))
* bridge sampling (see Section [10\.3\.3](Chap-03-06-model-comparison-BF.html#Chap-03-06-model-comparison-BF-bridge))
2. get Bayes factor directly
* Savage\-Dickey method (see Section [11\.4\.1](ch-03-05-Bayesian-testing-comparison.html#ch-03-07-hypothesis-testing-Bayes-Savage-Dickey))
* using encompassing models (see Section [11\.4\.2](ch-03-05-Bayesian-testing-comparison.html#ch-03-07-hypothesis-testing-Bayes-encompassing-models))
### 10\.3\.1 Grid approximation
We can use *grid approximation* to approximate a model’s marginal likelihood if the model is small enough, say, no more than 4\-5 free parameters.
Grid approximation considers discrete values for each parameter evenly spaced over the whole range of plausible parameter values, thereby approximating the integral in the definition of marginal likelihoods.
Let’s calculate an example for the comparison of the exponential and the power model of forgetting.
To begin with, we need to define a prior over parameters to obtain Bayesian versions of the exponential and power model.
Here, we assume flat priors over a reasonable range of parameter values for simplicity. For the exponential model, we choose:
\\\[
\\begin{aligned}
P(k \\mid a, b, N, M\_{\\text{exp}}) \& \= \\text{Binom}(k,N, a \\exp (\-bt\_i)) \\\\
P(a \\mid M\_{\\text{exp}}) \& \= \\text{Uniform}(a, 0, 1\.5\) \\\\
P(b \\mid M\_{\\text{exp}}) \& \= \\text{Uniform}(b, 0, 1\.5\)
\\end{aligned}
\\]
The (Bayesian) power model is given by:
\\\[
\\begin{aligned}
P(k \\mid c, d, N, M\_{\\text{pow}}) \& \= \\text{Binom}(k,N, c\\ t\_i^{\-d}) \\\\
P(c \\mid M\_{\\text{pow}}) \& \= \\text{Uniform}(c, 0, 1\.5\) \\\\
P(d \\mid M\_{\\text{pow}}) \& \= \\text{Uniform}(d, 0, 1\.5\)
\\end{aligned}
\\]
We can also express these models in code, like so:
```
# prior exponential model
priorExp <- function(a, b){
dunif(a, 0, 1.5) * dunif(b, 0, 1.5)
}
# likelihood function exponential model
lhExp <- function(a, b){
theta <- a * exp(-b * t)
theta[theta <= 0.0] <- 1.0e-5
theta[theta >= 1.0] <- 1 - 1.0e-5
prod(dbinom(x = obs, prob = theta, size = 100))
}
# prior power model
priorPow <- function(c, d){
dunif(c, 0, 1.5) * dunif(d, 0, 1.5)
}
# likelihood function power model
lhPow <- function(c, d){
theta <- c * t^(-d)
theta[theta <= 0.0] <- 1.0e-5
theta[theta >= 1.0] <- 1 - 1.0e-5
prod(dbinom(x = obs, prob = theta, size = 100))
}
```
To approximate each model’s marginal likelihood via grid approximation, we consider equally spaced values for both parameters (a tightly knit grid), evaluate the prior and the likelihood for each parameter pair, and finally sum the prior\-weighted likelihoods over all grid points:
```
# make sure the functions accept vector input
lhExp <- Vectorize(lhExp)
lhPow <- Vectorize(lhPow)
# define the step size of the grid
stepsize <- 0.01
# calculate the marginal likelihood
marg_lh <- expand.grid(
x = seq(0.005, 1.495, by = stepsize),
y = seq(0.005, 1.495, by = stepsize)
) %>%
mutate(
lhExp = lhExp(x, y), priExp = 1 / length(x), # uniform priors!
lhPow = lhPow(x, y), priPow = 1 / length(x)
)
# output result
str_c(
"BF in favor of exponential model: ",
with(marg_lh, sum(priExp * lhExp) / sum(priPow * lhPow)) %>% round(2)
)
```
```
## [1] "BF in favor of exponential model: 1221.39"
```
Based on this computation, we would be entitled to conclude that the data provide overwhelming evidence in favor of the exponential model. The result tells us that a rational agent should adjust her prior odds by a factor of more than 1000 in favor of the exponential model when updating her beliefs with the data. In other words, the data tilt our beliefs very strongly towards the exponential model, no matter what we believed initially. In this sense, the data provide strong evidence for the exponential model.
### 10\.3\.2 Naive Monte Carlo
For simple models (with maybe 4\-5 free parameters), we can also use naive Monte Carlo sampling to approximate Bayes factors. In particular, we can approximate the marginal likelihood by taking samples from the prior, calculating the likelihood of the data for each sampled parameter tuple, and then averaging over all calculated likelihoods:
\\\[P(D \\mid M\_i) \= \\int P(D \\mid \\theta, M\_i) \\ P(\\theta \\mid M\_i) \\ \\text{d}\\theta \\approx \\frac{1}{n} \\sum^{n}\_{\\theta\_j \\sim P(\\theta \\mid M\_i)} P(D \\mid \\theta\_j, M\_i)\\]
Here is a calculation using one million samples from the prior of each model:
```
nSamples <- 1000000
# sample from the prior
a <- runif(nSamples, 0, 1.5)
b <- runif(nSamples, 0, 1.5)
# calculate likelihood of data for each sample
lhExpVec <- lhExp(a, b)
lhPowVec <- lhPow(a, b)
# compute marginal likelihoods
str_c(
"BF in favor of exponential model: ",
round(mean(lhExpVec) / mean(lhPowVec), 2)
)
```
```
## [1] "BF in favor of exponential model: 1218.39"
```
We can also check the time course of our MC\-estimate by a plot like that in Figure [10\.3](Chap-03-06-model-comparison-BF.html#fig:Chap-03-06-model-comparison-MC-estimate-time).
The plot shows the current estimate of the Bayes factor on the \\(y\\)\-axis after having taken the number of samples given on the \\(x\\)\-axis.
We see that the initial calculations (after only 10,000 samples) are far off, but that the approximation finally gets reasonably close to the value calculated by grid approximation, which is shown as the red line.
Figure 10\.3: Temporal development (as more samples come in) of the Monte Carlo estimate of the Bayes factor in favor of the exponential model over the power model of forgetting. The red horizontal line indicates the Bayes factor estimate obtained previously via grid approximation.
**Exercise 11\.3**
Which statements concerning Bayes Factors (BF) are correct?
1. The Bayes Factor shows the absolute probability of a particular model to be a good explanation of the observed data.
2. If \\(BF\_{12} \= 11\\), one should conclude that there is strong evidence in favor of \\(M\_1\\).
3. Grid approximation allows us to compare no more than five models simultaneously.
4. With the Naive Monte Carlo method, we can only approximate the BF for models with continuous parameters.
5. BF computation penalizes more complex models.
Solution
Statements b. and e. are correct.
### 10\.3\.3 Excursion: Bridge sampling
For more complex models (e.g., high\-dimensional/hierarchical parameter spaces), naive Monte Carlo methods can be highly inefficient. If random sampling of parameter values from the priors is unlikely to deliver values for which the likelihood of the data is reasonably high, most naive MC samples will contribute very little information to the overall estimate of the marginal likelihood. For this reason, there are better sampling\-based procedures which preferentially sample *a posteriori* credible parameter values (given the data) and use clever math to compensate for using the wrong distribution to sample from. This is the main idea behind approaches like [importance sampling](https://en.wikipedia.org/wiki/Importance_sampling). A very promising approach is in particular **bridge sampling**, which also has its own R package ([Gronau et al. 2017](#ref-GronauSarafoglou2017:A-tutorial-on-b)).
We will not go into the formal details of this method, but just showcase here an application of the `bridgesampling` package.
This approach requires samples from the posterior, which we can obtain using Stan (see Section [9\.3\.2](Ch-03-03-estimation-algorithms.html#ch-03-03-estimation-Stan)).
Towards this end, we first assemble the data for input to the Stan program in a list:
```
forgetting_data <- list(
N = 100,
k = obs,
t = t
)
```
The models are implemented in Stan. We here only show the exponential model.
```
data {
int<lower=1> N ;
int<lower=0,upper=N> k[6] ;
int<lower=0> t[6];
}
parameters {
real<lower=0,upper=1.5> a ;
real<lower=0,upper=1.5> b ;
}
model {
// likelihood
for (i in 1:6) {
target += binomial_lpmf(k[i] | N, a * exp(-b * t[i])) ;
}
}
```
We then use Stan to obtain samples from the posterior in the usual way. To get reliable estimates of Bayes factors via bridge sampling, we should take a much larger number of samples than we usually would for a reliable estimation of, say, the posterior means and credible intervals.
```
stan_fit_expon <- rstan::stan(
# where is the Stan code
file = 'models_stan/model_comp_exponential_forgetting.stan',
# data to supply to the Stan program
data = forgetting_data,
# how many iterations of MCMC
iter = 20000,
# how many warmup steps
warmup = 2000
)
```
```
stan_fit_power <- rstan::stan(
# where is the Stan code
file = 'models_stan/model_comp_power_forgetting.stan',
# data to supply to the Stan program
data = forgetting_data,
# how many iterations of MCMC
iter = 20000,
# how many warmup steps
warmup = 2000
)
```
The `bridgesampling` package can then be used to calculate each model’s marginal likelihood.
```
expon_bridge <- bridgesampling::bridge_sampler(stan_fit_expon, silent = T)
power_bridge <- bridgesampling::bridge_sampler(stan_fit_power, silent = T)
```
We then obtain an estimate of the Bayes factor in favor of the exponential model with this function:
```
bridgesampling::bf(expon_bridge, power_bridge)
```
```
## Estimated Bayes factor in favor of expon_bridge over power_bridge: 1220.25382
```
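As a small follow\-up sketch (not part of the original text): assuming equal prior probabilities for both models, the estimated Bayes factor translates into a posterior model probability via the posterior\-odds formula from the beginning of this section.
```
# posterior probability of the exponential model under equal prior odds
# (Bayes factor value taken from the bridge sampling output above)
BF_estimate <- 1220.25
BF_estimate / (1 + BF_estimate)
```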
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/data-and-models-for-this-chapter.html |
11\.2 Data and models for this chapter
--------------------------------------
This chapter uses two case studies as running examples: the (fictitious) 24/7 coin\-flip example analyzed with the Binomial model, and data from the [Simon task](app-93-data-sets-simon-task.html#app-93-data-sets-simon-task) analyzed with a so\-called Bayesian \\(t\\)\-test model.
### 11\.2\.1 24/7
We will use the same (old) example of binomial data: \\(k \= 7\\) heads out of \\(N \= 24\\) coin flips.
Just as before, we will use the standard binomial model with a flat Beta prior, shown below in graphical notation:
Figure 11\.2: The Binomial Model (repeated from before).
We are interested in the following hypotheses:
1. **Point\-valued**: \\(\\theta\_c \= 0\.5\\)
2. **ROPE\-d**: \\(\\theta\_c \\in \[0\.5 \- \\epsilon; 0\.5 \+ \\epsilon]\\) with \\(\\epsilon \= 0\.01\\)
3. **Directional**: \\(\\theta\_c \< 0\.5\\)
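As a preview of how such hypotheses connect to the posterior (a sketch, not part of the original text): with the flat Beta prior and the 24/7 data, the posterior for \\(\\theta\_c\\) is \\(\\text{Beta}(8, 18\)\\) by conjugacy, so the posterior probability of the directional hypothesis can be computed directly:
```
# posterior probability of theta_c < 0.5 under the posterior Beta(1 + 7, 1 + 17)
pbeta(0.5, shape1 = 8, shape2 = 18)
```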
### 11\.2\.2 Simon task
The Simon task is a classic experimental design used to investigate, intuitively put, the interference between task\-relevant and task\-irrelevant properties.
Chapter [D.2](app-93-data-sets-simon-task.html#app-93-data-sets-simon-task) introduces the experiment and the (cleaned) data we analyze here.
```
data_simon_cleaned <- aida::data_ST
```
The most important columns in this data set for our current purposes are:
* `RT`: The reaction time for each trial.
* `condition`: Whether the trial was a congruent or an incongruent trial.
Concretely, we are interested in comparing the mean reaction times across conditions:
Figure 11\.3: Distribution of reaction times of correct answers in the congruent and incongruent condition of the Simon task. Vertical lines indicate the mean of each condition.
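As a rough numerical counterpart to this plot, here is a minimal sketch computing the mean reaction time per condition (for simplicity, it ignores the restriction to correct responses mentioned in the figure caption):
```
# mean reaction time per condition (sketch; no filtering to correct responses)
data_simon_cleaned %>%
  group_by(condition) %>%
  summarize(mean_RT = mean(RT))
```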
In order to compare the means of continuous measurements between two groups we will use a so\-called **\\(t\\)\-test model**. (The reason why this is called a “\\(t\\)\-test model” is historical and will become clear in Chapter [16](ch-05-01-frequentist-hypothesis-testing.html#ch-05-01-frequentist-hypothesis-testing).)
There are different variations of Bayesian \\(t\\)\-test models.
Here, we use the one proposed by Gönen et al. ([2005](#ref-GoenenJohnson2005)), which enables us to compute Bayes factor model comparison for point\-valued hypotheses analytically.
The model is shown in Figure [11\.4](data-and-models-for-this-chapter.html#fig:ch-03-06-comparison-t-test-ST).
Figure 11\.4: Bayesian \\(t\\)\-test model following Gönen et al. ([2005](#ref-GoenenJohnson2005)) for inferences about the difference in means in the Simon task data.
The model in Figure [11\.4](data-and-models-for-this-chapter.html#fig:ch-03-06-comparison-t-test-ST) assumes that there are two vectors \\(y\_1\\) and \\(y\_2\\) of continuous measurements.
In our case, these are the continuous measurements of reaction times in the incongruent (\\(y\_1\\)) and congruent (\\(y\_2\\)) group.
The model further assumes that all measurements in \\(y\_1\\) and \\(y\_2\\) are samples from two normal distributions, one for each group, with shared variance but possibly different means.
The means of the two normal distributions are represented in terms of the midpoint \\(\\mu\\) between the two group means.
The model is set up in such a way that there is a difference parameter \\(\\delta\\) which specifies the *standardized difference between group means*.
Standardization here means that the difference between the means is expressed relative to the standard deviation of the measurements (which is assumed to be the same in both groups).
The free variables in this model are therefore: the midpoint \\(\\mu\\) of the group means, the standardized difference \\(\\delta\\) between the group means, and the common standard deviation \\(\\sigma\\) of the measurements in each group.
The priors for these parameters are chosen in such a way as to enable direct calculation of Bayes factors for point\-valued hypotheses.
Notice that, by explicitly representing the difference parameter \\(\\delta\\) in the model, it is possible to put different kinds of *a priori* assumptions about the likely differences between groups directly into the model, namely in the form of \\(\\mu\_g\\) and \\(g\\), which are not free model parameters, but will be set by us modelers, here as \\(\\mu\_g \= 0\\) and \\(g \= 1\\).
We focus on the first hypothesis spelled out in Chapter [D.2](app-93-data-sets-simon-task.html#app-93-data-sets-simon-task), namely that the correct choices are faster in the congruent condition than in the incongruent condition.
So, based on this data and model, we are interested in the following statistical hypotheses:
1. **Point\-valued**: \\(\\delta \= 0\\)
2. **ROPE\-d**: \\(\\delta \\in \[0 \- \\epsilon; 0 \+ \\epsilon]\\) with \\(\\epsilon \= 0\.1\\)
3. **Directional**: \\(\\delta \> 0\\)
**Exercise 11\.1**
Paraphrase the three hypotheses given for the 24/7 data and the three hypotheses given for the Simon task in your own words.
Solution
24/7:
1. **Point\-valued**: \\(\\theta\_c \= 0\.5\\) \- the coin is fair, with a bias of exactly 0\.5
2. **ROPE\-d**: \\(\\theta\_c \\in \[0\.5 \- \\epsilon; 0\.5 \+ \\epsilon]\\) with \\(\\epsilon \= 0\.01\\) \- the coin’s bias lies between 0\.49 and 0\.51
3. **Directional** \\(\\theta\_c \< 0\.5\\) \- the coin is biased towards tails
Simon task:
1. **Point\-valued**: \\(\\delta \= 0\\) \- the difference between the means of reaction times in both groups is 0
2. **ROPE\-d**: \\(\\delta \\in \[0 \- \\epsilon; 0 \+ \\epsilon]\\) with \\(\\epsilon \= 0\.1\\) \- the absolute standardized difference between the mean reaction times of the two groups is no bigger than 10% of the common standard deviation
3. **Directional**: \\(\\delta \> 0\\) \- the mean reaction time in group 1 (incongruent trials) is bigger than the mean reaction time in group 2 (congruent trials)
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/ch-03-05-Bayes-testing-estimation.html |
11\.3 Testing via posterior estimation
--------------------------------------
The general logic of Bayesian hypothesis testing via parameter estimation is this.
Let \\(M\\) be the assumed model for observed data \\(D\_{\\text{obs}}\\).
We use Bayesian posterior inference to calculate or approximate the posterior \\(P\_M(\\theta \\mid D\_{\\text{obs}})\\).
We then look at an interval\-based estimate, usually a Bayesian credible interval, and compare the hypothesis in question to the region of *a posteriori* most probable values for the parameter(s) targeted by the hypothesis.
Concretely, for point\-valued hypotheses we can use the following approach. Let \\(\\Theta\\) be the parameter space of a model \\(M\\). We are interested in some component \\(\\Theta\_i\\) and our hypothesis is \\(\\Theta\_i \= \\theta^\*\_i\\) for some specific value \\(\\theta^\*\_i\\). A simple (but crude and controversial) way of addressing this point\-valued hypothesis based on observed data \\(D\\) is to look at whether \\(\\theta^\*\_i\\) lies inside some credible interval for parameter \\(\\Theta\_i\\) in the posterior derived by updating with data \\(D\\). A customary choice here is the 95% credible interval, but other choices, e.g., 80% credible intervals, are also used.
If a categorical decision rule is needed, we can:
* **accept** the point\-valued hypothesis if \\(\\theta^\*\\) is *inside* of the credible interval; and
* **reject** the point\-valued hypothesis if \\(\\theta^\*\\) is *outside* of the credible interval.
Kruschke ([2015](#ref-kruschke2015)) extends this approach to also address ROPE\-d hypotheses. He argues that we should *not* be concerned with point\-valued hypotheses, but rather with intervals constructed around the point\-value of interest. Kruschke, therefore, suggests looking at a **region of practical equivalence** (ROPE), usually defined by some \\(\\epsilon\\)\-region around \\(\\theta^\*\_i\\):
\\\[\\text{ROPE}(\\theta^\*\_i) \= \[\\theta^\*\_i\- \\epsilon, \\theta^\*\_i\+ \\epsilon]\\]
The choice of \\(\\epsilon\\) is context\-dependent and requires an understanding of the scale at which parameter values \\(\\Theta\_i\\) differ. If the parameter of interest is, for example, a difference \\(\\delta\\) in the means of reaction times, like in the Simon task, this parameter is intuitively interpretable. We can say, for instance, that an \\(\\epsilon\\)\-region of \\(\\pm 15\\text{ms}\\) is really so short that any value in \\(\[\-15\\text{ms}; 15\\text{ms}]\\) would be regarded as identical to \\(0\\) for all practical purposes because of what we know about reaction times and their potential differences. However, with parameters that are less clearly anchored to a concrete physical measurement about which we have solid distributional knowledge and/or reliable intuitions, fixing the size of the ROPE can be more difficult. For the bias of a coin flip, for instance, which we want to test at the point value \\(\\theta^\* \= 0\.5\\) (testing the coin for fairness), we might want to consider a ROPE like \\(\[0\.49; 0\.51]\\), although this choice may be less objectively defensible without previous experimental evidence from similar situations.
In Kruschke’s ROPE\-based approach where \\(\\epsilon \> 0\\), the decision about a point\-valued hypothesis becomes ternary. If \\(\[l;u]\\) is an interval\-based estimate of parameter \\(\\Theta\_i\\) and \\(\[\\theta^\*\_i \- \\epsilon; \\theta^\*\_i \+ \\epsilon]\\) is the ROPE around the point\-value of interest, we would:
* **accept** the point\-valued hypothesis iff \\(\[l;u]\\) is contained entirely in \\(\[\\theta^\*\_i \- \\epsilon; \\theta^\*\_i \+ \\epsilon]\\);
* **reject** the point\-valued hypothesis iff \\(\[l;u]\\) and \\(\[\\theta^\*\_i \- \\epsilon; \\theta^\*\_i \+ \\epsilon]\\) have no overlap; and
* **withhold judgement** otherwise.
Going beyond Kruschke’s approach to ROPE\-d hypotheses, it is possible to extend this ternary decision logic also to cover directional hypotheses.
### 11\.3\.1 Example: 24/7
For the Binomial model and the 24/7 data, we know that the posterior is of the form \\(\\text{Beta}(8,18\)\\).
Here is a plot of the posterior (repeated from before) which also includes the 95% credible interval for the coin bias \\(\\theta\_c\\).
To address our point\-valued hypothesis of \\(\\theta\_{c} \= 0\.5\\) that the coin is fair, we just have to check if the critical value of 0\.5 is inside or outside the 95% credible interval.
In the case at hand, it is not.
We would therefore, by the binary decision logic of this approach, *reject* the hypothesis \\(\\theta\_{c} \= 0\.5\\) that the coin is fair.
(Notice that while, strictly speaking, this approach does not pay attention to how closely the credible interval includes or excludes the critical value, we should normally take into account that the boundaries of the credible intervals are uncertain estimates based on posterior samples.)
Using the ROPE\-approach of Kruschke, we notice that our ROPE of \\(\\theta \= 0\.5 \\pm 0\.01\\) is also fully outside of the 95% HDI.
Here too, we conclude that the idea of an “approximately fair coin” is sufficiently unlikely to act as if it was false.
In other words, by the ternary decision logic of this approach, we would reject the ROPE\-d hypothesis \\(\\theta \= 0\.5 \\pm 0\.01\\).
(In practice, especially when we are uncertain about how exactly to pin down \\(\\epsilon\\), we might also sometimes want to give the range of \\(\\epsilon\\) values for which the ROPE\-d hypothesis would be accepted or rejected. So, here we could also say that for any \\(\\epsilon \< 0\.016\\) we would reject the ROPE\-d hypothesis.)
The directional hypothesis that the coin is biased towards tails, \\(\\theta\_c \< 0\.5\\), contains the 95% credible interval in its entirety.
We would therefore, following the ternary decision logic, *accept* this hypothesis based on the model and data.
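As a small illustration of how these checks can be carried out in R, the sketch below approximates the 95% HDI of the \\(\\text{Beta}(8,18\)\\) posterior from random draws, using the `HDInterval` package (our choice for illustration here, not a package used elsewhere in this book), and then applies the binary and ternary decision rules.
```
# approximate the 95% HDI of the Beta(8, 18) posterior from random draws
# (HDInterval is one convenient option; any HDI routine would do)
set.seed(1234)
posterior_draws <- rbeta(1e6, shape1 = 8, shape2 = 18)
hdi_95 <- HDInterval::hdi(posterior_draws, credMass = 0.95)
hdi_95
# point-valued hypothesis theta_c = 0.5: is the critical value inside the HDI?
theta_star <- 0.5
theta_star >= hdi_95["lower"] & theta_star <= hdi_95["upper"] # FALSE -> reject
# ROPE-d hypothesis theta_c = 0.5 +/- 0.01: does the ROPE overlap the HDI?
rope <- c(0.49, 0.51)
rope[1] <= hdi_95["upper"] & rope[2] >= hdi_95["lower"] # FALSE -> reject
# directional hypothesis theta_c < 0.5: does it contain the whole HDI?
hdi_95["upper"] < 0.5 # TRUE -> accept
```
The resulting decisions match the ones described in the text.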
### 11\.3\.2 Example: Simon Task
We use `Stan` to draw samples from the posterior distribution.
We start with assembling the data:
```
simon_data_4_Stan <- list(
y1 = data_simon_cleaned %>% filter(condition == "incongruent") %>% pull(RT),
N1 = nrow(data_simon_cleaned %>% filter(condition == "incongruent")),
y2 = data_simon_cleaned %>% filter(condition == "congruent") %>% pull(RT),
N2 = nrow(data_simon_cleaned %>% filter(condition == "congruent"))
)
```
Here is the model from Figure [11\.4](data-and-models-for-this-chapter.html#fig:ch-03-06-comparison-t-test-ST) implemented in `Stan`.
```
data {
int<lower=1> N1 ;
int<lower=1> N2 ;
vector[N1] y1 ;
vector[N2] y2 ;
}
parameters {
real mu ;
real<lower=0> sigma ;
real delta ;
}
model {
// priors
target += log(1/sigma) ;
delta ~ normal(0, 1) ;
// likelihood
y1 ~ normal(mu + sigma*delta/2, sigma^2) ;
y2 ~ normal(mu - sigma*delta/2, sigma^2) ;
}
```
```
# sampling
stan_fit_ttest <- rstan::stan(
# where is the Stan code
file = 'models_stan/ttest_model.stan',
# data to supply to the Stan program
data = simon_data_4_Stan,
# how many iterations of MCMC
# more samples b/c of following approximations
iter = 20000,
# how many warmup steps
warmup = 1000
)
```
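Before interpreting the output, one would normally also check the usual convergence diagnostics; a quick look at the \\(\\hat{R}\\) statistics (a suggested sanity check, not part of the book’s code at this point) could look like this:
```
# quick convergence check: Rhat should be close to 1 for all parameters
library(rstan)
summary(stan_fit_ttest, pars = c("delta", "mu", "sigma"))$summary[, "Rhat"]
```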
Here is a concise summary of the relevant parameters:
```
Bayes_estimates_ST <- rstan::As.mcmc.list(
stan_fit_ttest, pars = c('delta', 'mu', 'sigma')
) %>%
aida::summarize_mcmc_list()
Bayes_estimates_ST
```
```
## # A tibble: 3 × 4
## Parameter `|95%` mean `95|%`
## <fct> <dbl> <dbl> <dbl>
## 1 delta 1.45 1.71 1.97
## 2 mu 460. 462. 463.
## 3 sigma 9.63 9.68 9.72
```
Figure [11\.5](ch-03-05-Bayes-testing-estimation.html#fig:ch-03-07-hypothesis-testing-Bayes-tt2-posterior) shows the posterior distribution over \\(\\delta\\) and the 95% HDI (in red).
Figure 11\.5: Posterior density of the \\(\\delta\\) parameter in the Bayesian \\(t\\)\-test model for Simon task data with the 95% HDI (in red).
For the point\-valued hypothesis \\(\\delta \= 0\\), whose critical value is clearly outside of the 95% credible interval, the binary decision criterion would have us *reject* the hypothesis that the difference between group means is precisely zero.
For a ROPE\-d hypothesis \\(\\delta \= 0 \\pm 0\.1\\), we reach the same conclusion by the ternary decision rule of Kruschke, since the entire ROPE is outside of the credible interval.
The directional hypothesis that \\(\\delta \> 0\\) is *accepted* by the ternary decision approach.
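Programmatically, these checks could be run on the posterior samples directly. The sketch below uses an equal\-tailed 95% credible interval computed with `quantile()` as a simple stand\-in for the HDI (for the roughly symmetric posterior of \\(\\delta\\) the two are very close) and assumes that the samples are extracted from the Stan fit with `rstan::extract()`.
```
# extract the posterior samples for delta from the Stan fit
delta_samples <- rstan::extract(stan_fit_ttest, pars = "delta")$delta
# equal-tailed 95% credible interval as a simple stand-in for the HDI
ci_95 <- quantile(delta_samples, probs = c(0.025, 0.975))
ci_95
# point-valued hypothesis delta = 0: is the critical value inside the interval?
0 >= ci_95[1] & 0 <= ci_95[2] # FALSE -> reject
# ROPE-d hypothesis delta = 0 +/- 0.1: does the ROPE overlap the interval?
rope <- c(-0.1, 0.1)
rope[1] <= ci_95[2] & rope[2] >= ci_95[1] # FALSE -> reject
# directional hypothesis delta > 0: does it contain the whole interval?
ci_95[1] > 0 # TRUE -> accept
```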
**Exercise 11\.2**
In this exercise, we will recap the decision rules of the two approaches introduced in this chapter. Using the binary approach for point\-valued hypotheses, there are two possible outcomes, namely rejecting \\(H\_0\\) and failing to reject \\(H\_0\\). Following Kruschke’s ROPE approach, we can also withhold judgment. Use pen and paper to draw examples of the situations a\-e given below. For each case, draw any distribution representing the posterior (e.g., a bell\-shaped curve), the approximate 95% HDI and an arbitrary point value of interest \\(\\theta^\*\\). For tasks c\-e, also draw an arbitrary ROPE around the point value.
Concretely, we’d like you to sketch…
1. …one instance where we would not reject a point\-valued hypothesis \\(H\_0: \\theta \= \\theta^\*\\).
2. …one instance where we would reject a point\-valued hypothesis \\(H\_0: \\theta \= \\theta^\*\\).
3. …two instances where we would not reject a ROPE\-d hypothesis \\(H\_0: \\theta \= \\theta^\* \\pm \\epsilon\\).
4. …two instances where we would reject a ROPE\-d hypothesis \\(H\_0: \\theta \= \\theta^\* \\pm \\epsilon\\).
5. …two instances where we would withhold judgement regarding a ROPE\-d hypothesis \\(H\_0: \\theta \= \\theta^\* \\pm \\epsilon\\).
Solution
One solution to this exercise might look as follows.
The red shaded area under the curves shows the 95% credible interval. The black dots represent (arbitrary) point values of interest, and the horizontal bars in panels (c)\-(e) depict the ROPE around a given point value.
| Data Science |
michael-franke.github.io | https://michael-franke.github.io/intro-data-analysis/ch-03-05-Bayesian-testing-comparison.html |
11\.4 Testing via model comparison
----------------------------------
Testing hypotheses based on parameter estimation, and in particular the categorical decision rules for accepting or rejecting hypotheses outlined in the previous section, only gives a very coarse\-grained picture.
Bayesian analysis is about providing quantitative information about uncertainty and evidence in terms that are intuitive and easily interpretable.
So, we would also like to have a quantitative assessment of the evidence for or against a hypothesis provided by some data against the background of a given model.
This is what the comparison\-based approaches to Bayesian hypothesis testing give us.
Here is some further motivation why model comparison might be a good replacement for “testing via estimation”.
A statistical hypothesis \\(H\\) is basically an event: a subset of parameter values is picked out of the whole parameter space.
After observing data \\(D\_\\text{obs}\\) and based on model \\(M\\), the ideal measure to have is \\(P\_M(H \\mid D\_\\text{obs})\\): given data and model, how likely is the hypothesis in question?
The problem with this posterior formulation \\(P\_M(H \\mid D\_\\text{obs})\\) is that, for it to be meaningful, it must quantify over the set of all alternative hypotheses.
If \\(H\\) is a point\-valued hypothesis over a single parameter, the set of all alternative hypotheses could comprise all other logically possible point\-valued hypotheses for the same parameter.
But then, if that parameter is a continuous parameter, the posterior density \\(P\_M(H \\mid D\_\\text{obs})\\) at that point value is not meaningfully interpretable as a probability (mass).
If \\(H\\) is an interval\-based hypothesis, the posterior \\(P\_M(H \\mid D\_\\text{obs})\\) would be meaningfully interpretable as a probability (mass), but still the question of what exactly the space of alternatives is remains implicit.
Moreover, the posterior \\(P\_M(H \\mid D\_\\text{obs})\\) is influenced by the model’s prior over \\(H\\).
So, a nominally high value of \\(P\_M(H \\mid D\_\\text{obs})\\) is as such uninteresting because we would need to take the prior \\(P\_M(H)\\) into account as well.
This is why a comparison\-based approach to Bayesian hypothesis testing explicitly compares two models:
* **The null model** \\(M\_0\\) is the model that incorporates the assumption of the hypothesis \\(H\\) to be tested. For example, the null model would put prior probability zero on those parameter values which are ruled out by \\(H\\).
* **The alternative model** \\(M\_1\\) is an explicitly formulated model which incorporates some contextually or technically useful alternative to \\(M\_0\\).
The comparison\-based approach to hypothesis testing then quantifies, using Bayes factors, the evidence that \\(D\_\\text{obs}\\) provides for or against \\(M\_0\\) (the model representing the “null hypothesis”) over the alternative model \\(M\_1\\) (the model representing the alternative hypothesis).
In this way, by looking at the ratio:
\\\[
BF\_{01} \= \\frac{P(D\_\\text{obs} \\mid M\_0\)}{P(D\_\\text{obs} \\mid M\_1\)}
\\]
this approach is independent of the prior probability assigned to models \\(P(M\_0\)\\) and \\(P(M\_1\)\\).
Notice, however, that it is *not* independent of the priors over \\(\\theta\\) used in \\(M\_1\\)!
When the null hypothesis is point\-valued, the alternative model is *not* based on the complement \\(\\theta \\neq \\theta^\*\\), but on the technically much more practical and conceptually more plausible assumption that \\(\\theta\\) is free to range over a larger interval including, but not limited to, \\(\\theta^\*\\). We can then use the so\-called Savage\-Dickey method, described in Section [11\.4\.1](ch-03-05-Bayesian-testing-comparison.html#ch-03-07-hypothesis-testing-Bayes-Savage-Dickey), to compare the null and the alternative models as so\-called *nested models*.
When the null hypothesis is interval\-valued, the alternative model can be conceived as based on the complement of the null hypothesis. We can then use an extension of the Savage\-Dickey method based on a so\-called encompassing model, described in Section [11\.4\.2](ch-03-05-Bayesian-testing-comparison.html#ch-03-07-hypothesis-testing-Bayes-encompassing-models), where we construe both the null model and the alternative model as nested under a third, well, encompassing model.
This chapter shows how Bayes factors can be approximated based on samples from the posterior following both of these approaches.
### 11\.4\.1 The Savage\-Dickey method
The Savage\-Dickey method is a very convenient way of computing Bayes factors for *nested models*, especially when models only differ with respect to one parameter.
Suppose that there are \\(n\\) continuous parameters of interest \\(\\theta \= \\langle \\theta\_1, \\dots, \\theta\_n \\rangle\\). \\(M\_1\\) is a (Bayesian) model defined by \\(P(\\theta \\mid M\_1\)\\) and \\(P(D \\mid \\theta, M\_1\)\\). \\(M\_0\\) is **properly nested** under \\(M\_1\\) if:
* \\(M\_0\\) assigns fixed values to parameters \\(\\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n\\)
* \\(P(D \\mid \\theta\_1, \\dots, \\theta\_{i\-1}, M\_0\) \= P(D \\mid \\theta\_1, \\dots, \\theta\_{i\-1}, \\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n, M\_1\)\\)
* \\(\\lim\_{\\theta\_i \\rightarrow x\_i, \\dots, \\theta\_n \\rightarrow x\_n} P(\\theta\_1, \\dots, \\theta\_{i\-1} \\mid \\theta\_i, \\dots, \\theta\_n, M\_1\) \= P(\\theta\_1, \\dots, \\theta\_{i\-1} \\mid M\_0\)\\)
Intuitively put, \\(M\_0\\) is properly nested under \\(M\_1\\), if \\(M\_0\\) is a special case of \\(M\_1\\) which fixes certain parameters to specific point\-values.
Notice that the last condition is satisfied in particular when \\(M\_1\\)’s prior over \\(\\theta\_1, \\dots, \\theta\_{i\-1}\\) is independent of the values for the remaining parameters.
We can express a point\-valued hypothesis in terms of a model \\(M\_0\\) which is nested under the alternative model \\(M\_1\\), the latter of which assumes that the parameters in question can take more than one value.
For such properly nested models, we can compute a Bayes factor efficiently using the following result.
**Theorem 11\.1 (Savage\-Dickey Bayes factors for nested models)** Let \\(M\_0\\) be properly nested under \\(M\_1\\) s.t. \\(M\_0\\) fixes \\(\\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n\\). The Bayes factor \\(\\text{BF}\_{01}\\) in favor of \\(M\_0\\) over \\(M\_1\\) is then given by the ratio of posterior probability to prior probability of the parameters \\(\\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n\\) from the point of view of the nesting model \\(M\_1\\):
\\\[
\\begin{aligned}
\\text{BF}\_{01} \& \= \\frac{P(\\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n \\mid D, M\_1\)}{P(\\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n \\mid M\_1\)}
\\end{aligned}
\\]
Show proof.
*Proof*. Let’s assume that \\(M\_0\\) has parameters \\(\\theta \= \\langle\\phi, \\psi\\rangle\\) with \\(\\phi \= \\phi\_0\\), and that \\(M\_1\\) has parameters \\(\\theta \= \\langle\\phi, \\psi \\rangle\\) with \\(\\phi\\) free to vary. If \\(M\_0\\) is properly nested under \\(M\_1\\), we know that \\(\\lim\_{\\phi \\rightarrow \\phi\_0} P(\\psi \\mid \\phi, M\_1\) \= P(\\psi \\mid M\_0\)\\). We can then rewrite the marginal likelihood under \\(M\_0\\) as follows:
\\\[
\\begin{aligned}
P(D \\mid M\_0\) \& \= \\int P(D \\mid \\psi, M\_0\) P(\\psi \\mid M\_0\) \\ \\text{d}\\psi
\& \\text{\[marginalization]}
\\\\
\& \= \\int P(D \\mid \\psi, \\phi \= \\phi\_0, M\_1\) P(\\psi \\mid \\phi \= \\phi\_0, M\_1\) \\ \\text{d}\\psi
\& \\text{\[assumption of nesting]}
\\\\
\& \= P(D \\mid \\phi \= \\phi\_0, M\_1\)
\& \\text{\[marginalization]}
\\\\
\& \= \\frac{P(\\phi \= \\phi\_0 \\mid D, M\_1\) P(D \\mid M\_1\)}{P(\\phi \= \\phi\_0 \\mid M\_1\)}
\& \\text{\[Bayes rule]}
\\end{aligned}
\\]
The result follows if we divide by \\(P(D \\mid M\_1\)\\) on both sides of the equation.
#### 11\.4\.1\.1 Example: 24/7
Here is an example based on the 24/7 data. For a nesting model with a flat prior (\\(\\theta \\sim^{M\_1} \\text{Beta}(1,1\)\\)), and a point hypothesis \\(\\theta\_c \= 0\.5\\), we just have to calculate the prior and posterior probability of the critical value \\(\\theta\_c \= 0\.5\\):
```
# point-value of interest
theta_star <- 0.5
# posterior probability in nesting model
posterior_theta_star <- dbeta(theta_star, 8, 18)
# prior probability in nesting model
prior_theta_star <- dbeta(theta_star, 1, 1)
# Bayes factor (using Savage-Dickey)
BF_01 <- posterior_theta_star / prior_theta_star
BF_01
```
```
## [1] 0.5157351
```
This is very minor evidence in favor of the alternative model (Bayes factor \\(\\text{BF}\_{10} \\approx 1\.94\\)). We would not like to draw any (strong) categorical conclusions from this result regarding the question of whether the coin might be fair. Figure [11\.6](ch-03-05-Bayesian-testing-comparison.html#fig:ch-03-07-hypothesis-testing-Bayes-SD-24-7) also shows the relation between prior and posterior at the point\-value of interest.
Figure 11\.6: Illustration of the Savage\-Dickey method of Bayes factor computation for the 24/7 case.
#### 11\.4\.1\.2 Example: Simon task
In the previous 24/7 example, using the Savage\-Dickey method was particularly easy because we know the posterior in closed form, so that we could calculate the posterior density at the critical value without further ado.
When this is not the case, like in the application to the Simon task data, we have to estimate the posterior density at the critical value, here \\(\\delta \= 0\\), from the posterior samples obtained earlier in this chapter (using Stan).
An approximate method for obtaining this value is implemented in the `polspline` package (using polynomial splines to approximate the posterior curve).
```
# extract the samples for the delta parameter
# from the earlier Stan fit
delta_samples <- tidy_draws_tt2 %>%
filter(Parameter == "delta") %>%
pull(value)
# estimating the posterior density at delta = 0 with polynomial splines
fit.posterior <- polspline::logspline(delta_samples)
posterior_delta_null <- polspline::dlogspline(0, fit.posterior)
# computing the prior density of the point-value of interest
# [NB: the prior on delta was a standard normal]
prior_delta_null <- dnorm(0, 0, 1)
# compute BF via Savage-Dickey
BF_delta_null = posterior_delta_null / prior_delta_null
BF_delta_null
```
```
## [1] 2.148062e-14
```
We conclude from this result that the data provide extremely strong evidence against the null model, which assumes that \\(\\delta \= 0\\), when compared to an alternative model \\(M\_1\\), which assumes that \\(\\delta \\sim \\mathcal{N}(0,1\)\\) in the prior.
**Exercise 11\.3: Bayes factors with the Savage\-Dickey method**
Look at the plot below. You see the prior distribution and the posterior distribution over the \\(\\delta\\) parameter in a Bayesian \\(t\\)\-test model. We are going to use this plot to determine (roughly) the Bayes factor of two models: the full Bayesian \\(t\\)\-test model, and a model nested under this full model which assumes that \\(\\delta \= 0\\).
1. Describe in intuitive terms what it means for a Bayesian model to be nested under another model. It is sufficient to neglect the conditions on the priors.
Solution
A model nested under another model fixes certain parameters to specific values which may take on more than one value in the nesting model.
2. Write down the formula for the Bayes factor in favor of the null model (where \\(\\delta \= 0\\)) over the full model using the Savage\-Dickey theorem.
Solution
\\(BF\_{01}\=\\frac{P(\\delta \= 0 \\mid D, M\_1\)}{P(\\delta \= 0 \\mid M\_1\)}\\).
3. Give a natural language paraphrase of the formula you wrote down above.
Solution
The Bayes factor in favor of the embedded null model over the embedding model is given by the posterior density at \\(\\delta \= 0\\) under the nesting model divided by the prior in the nesting model at \\(\\delta \= 0\\).
4. Now look at the plot above. Give your approximate guess of the Bayes factor in favor of the null model in terms of a fraction of whole integers (something like: \\(\\frac{4}{3}\\) or \\(\\frac{27}{120}\\), …).
Solution
\\(BF\_{01} \\approx \\frac{5}{2}\\) (see plot above).
5. Formulate a conclusion to be drawn from this numerical result about the research hypothesis that the mean of the two groups compared here is identical. Write one concise sentence like you would in a research paper.
Solution
A BF of \\(\\frac{5}{2}\\) is mild evidence in favor of the null model, but conventionally not considered strong enough to be particularly noteworthy.
#### 11\.4\.1\.3 \[Excursion:] Calculating the Bayes factor precisely
under construction
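While this excursion is still to be written, the 24/7 case can at least be handled by hand, since both marginal likelihoods are available in closed form: under \\(M\_0\\) the marginal likelihood is simply the Binomial likelihood at \\(\\theta \= 0\.5\\), and under \\(M\_1\\) (flat \\(\\text{Beta}(1,1\)\\) prior) it is a Beta\-Binomial expression. The following sketch is our own illustration of this calculation, not the book’s missing text.
```
# exact Bayes factor for the 24/7 data (k = 7 heads out of N = 24 flips)
k <- 7
N <- 24
# marginal likelihood under M0 (theta fixed at 0.5)
marg_lh_M0 <- dbinom(k, N, 0.5)
# marginal likelihood under M1 (flat Beta(1,1) prior on theta):
# integral of choose(N, k) * theta^k * (1 - theta)^(N - k) d theta
# = choose(N, k) * B(k + 1, N - k + 1), which is 1 / (N + 1) for a flat prior
marg_lh_M1 <- choose(N, k) * beta(k + 1, N - k + 1)
BF_01_exact <- marg_lh_M0 / marg_lh_M1
BF_01_exact
# agrees with the Savage-Dickey result of about 0.516 obtained above
```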
### 11\.4\.2 Encompassing models
The Savage\-Dickey method can be generalized to also cover interval\-valued hypotheses.
The previous literature has focused on inequality\-based intervals/hypotheses (like \\(\\theta \\ge 0\.5\\)) ([Klugkist, Kato, and Hoijtink 2005](#ref-KlugkistKato2005:Bayesian-model); [Wetzels, Grasman, and Wagenmakers 2010](#ref-WetzelsGrasman2010:An-encompassing); [Oh 2014](#ref-Oh2014:Bayesian-compar)), but the method also applies to ROPE\-d hypotheses.
The advantage of this method is that we can use samples from the posterior distribution to approximate integrals, which is more robust than having to estimate point\-values of posterior density.
Following previous work ([Klugkist, Kato, and Hoijtink 2005](#ref-KlugkistKato2005:Bayesian-model); [Wetzels, Grasman, and Wagenmakers 2010](#ref-WetzelsGrasman2010:An-encompassing); [Oh 2014](#ref-Oh2014:Bayesian-compar)), the main idea is to use so\-called **encompassing priors**. Let \\(\\theta\\) be a single parameter of interest (for simplicity[53](#fn53)), which can in principle take on any real value. We are interested in the interval\-based hypotheses:
* \\(H\_0 \\colon \\theta \\in I\_0\\), and
* \\(H\_1 \\colon \\theta \\in I\_{1}\\)
where \\(I\_{0}\\) is an interval, possibly half\-open, and \\(I\_1\\) is the “negation” of \\(I\_0\\) (in the sense that \\(I\_1 \= \\left \\{ \\theta \\mid \\theta \\not \\in I\_0 \\right \\}\\)).
An **encompassing model** \\(M\_e\\) has a suitable likelihood function \\(P(D \\mid \\theta, \\omega, M\_{e})\\) (where \\(\\omega\\) is a vector of other parameters besides the parameter \\(\\theta\\) of interest, so\-called “nuisance parameters”).
It also defines a prior \\(P(\\theta, \\omega \\mid M\_{e})\\), which does not already rule out \\(H\_{0}\\) or \\(H\_{1}\\).
Generalizing over the Savage\-Dickey approach, we construct *two* models, one for each hypothesis, *both* of which are nested under the encompassing model:
* \\(M\_0\\) has prior \\(P(\\theta, \\omega \\mid M\_0\) \= P(\\theta, \\omega \\mid \\theta \\in I\_0, M\_e)\\)
* \\(M\_1\\) has prior \\(P(\\theta, \\omega \\mid M\_1\) \= P(\\theta, \\omega \\mid \\theta \\in I\_1, M\_e)\\)
We assume that the priors over \\(\\theta\\) are independent of the nuisance parameters \\(\\omega\\).
Both \\(M\_0\\) and \\(M\_1\\) have the same likelihood function as \\(M\_e\\).
Figure [11\.7](ch-03-05-Bayesian-testing-comparison.html#fig:ch-03-07-hypothesis-testing-Bayes-encompassing-prior) shows an example of the priors of an encompassing model for two nested models based on a ROPE\-d hypothesis testing approach.
Figure 11\.7: Example of the prior of an encompassing model and the priors of two models nested under it.
If our hypothesis of interest is \\(I\_0\\), which is captured in \\(M\_0\\), there are two comparisons we can make to quantify evidence in favor of or against \\(M\_0\\): we can compare \\(M\_{0}\\) against the encompassing model \\(M\_{e}\\), or against its “negation” \\(M\_{1}\\).
Bayes Factors for both comparisons can be easily expressed with the encompassing\-models approach, as shown in Theorems [11\.2](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-encompassing) and [11\.3](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-alternative).
Essentially, we can express Bayes Factors in terms of statements regarding the prior or posterior probability of \\(I\_0\\) and \\(I\_1\\) from the point of view of the encompassing model alone.
This means that we can approximate these Bayes Factors by just setting up one model, the encompassing model, and retrieving prior and posterior samples for it.
Concretely, Theorem [11\.2](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-encompassing) states that the Bayes Factor in favor of \\(M\_i\\), when compared against the encompassing model \\(M\_e\\), is the posterior probability of \\(\\theta\\) being in \\(I\_i\\) divided by the prior probability, both from the perspective of \\(M\_e\\).
**Theorem 11\.2** The Bayes Factor in favor of nested model \\(M\_{i}\\) over encompassing model \\(M\_{e}\\) is:
\\\[\\begin{align\*}
\\mathrm{BF}\_{ie} \= \\frac{P(\\theta \\in I\_{i} \\mid D, M\_{e})}{P(\\theta \\in I\_{i} \\mid M\_{e})}
\\end{align\*}\\]
Show proof.
*Proof*. The following is only a sketch of a proof.
Important formal details are glossed over.
For more detail, see ([Klugkist and Hoijtink 2007](#ref-KlugkistHoijtink2007:The-Bayes-facto)).
We start by making three observations which hold for any model \\(M\_{i}\\), \\(i \\in \\left \\{ 0,1 \\right \\}\\), and any pair of vectors of parameter values \\(\\theta\\prime, \\omega\\prime\\) such that \\(P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i}) \\neq 0\\) (which entails that \\(\\theta\\prime \\in I\_{i}\\), \\(P(\\theta\\prime, \\omega\\prime \\mid M\_{i}) \> 0\\) and \\(P(D \\mid \\theta\\prime, \\omega\\prime, M\_{i}) \> 0\\)):
* **Observation 1:** The definition of the posterior:
\\\[\\begin{align\*}
P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i}) \= \\frac{P(D \\mid \\theta\\prime, \\omega\\prime, M\_{i}) \\ P(\\theta\\prime, \\omega\\prime \\mid M\_{i})}{P(D \\mid M\_{i})}
\\end{align\*}\\]
can be rewritten as:
\\\[\\begin{align\*}
P(D \\mid M\_{i}) \= \\frac{P(D \\mid \\theta\\prime, \\omega\\prime, M\_{i}) \\ P(\\theta\\prime, \\omega\\prime \\mid M\_{i})}{P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i})}
\\end{align\*}\\]
This also holds for model \\(M\_{e}\\).
* **Observation 2:** The prior for \\(\\theta\\prime, \\omega\\prime\\) in \\(M\_{i}\\) can be expressed in terms of the priors in \\(M\_{e}\\) as:
\\\[\\begin{align\*}
P(\\theta\\prime, \\omega\\prime \\mid M\_{i}) \= \\frac{P(\\theta\\prime, \\omega\\prime \\mid M\_{e})}{P(\\theta \\in I\_{i} \\mid M\_{e})}
\\end{align\*}\\]
* **Observation 3:** The posterior for \\(\\theta\\prime, \\omega\\prime\\) in \\(M\_{i}\\) can be expressed in terms of the posteriors in \\(M\_{e}\\) as:
\\\[\\begin{align\*}
P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i}) \= \\frac{P(\\theta\\prime, \\omega\\prime \\mid D, M\_{e})}{P(\\theta \\in I\_{i} \\mid D, M\_{e})}
\\end{align\*}\\]
With these observations in place, we can rewrite the Bayes Factor \\(\\mathrm{BF}\_{ie}\\) in terms of a pair of vectors of parameter values \\(\\theta\\prime, \\omega\\prime\\) (for which \\(P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i}) \\neq 0\\)) as:
\\\[\\begin{align\*}
\\mathrm{BF}\_{ie}
\& \= \\frac{P(D \\mid M\_{i})}{P(D \\mid M\_{e})}
\\\\
\& \= \\frac{P(D \\mid \\theta\\prime, \\omega\\prime, M\_{i}) \\ P(\\theta\\prime, \\omega\\prime \\mid M\_{i})\\ / \\ P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i})}
{P(D \\mid \\theta\\prime, \\omega\\prime, M\_{e}) \\ P(\\theta\\prime, \\omega\\prime \\mid M\_{e})\\ / \\ P(\\theta\\prime, \\omega\\prime \\mid D, M\_{e})}
\& \\textcolor{gray}{\[\\text{by Obs.\~1}]}
\\\\
\& \= \\frac{ P(\\theta\\prime, \\omega\\prime \\mid M\_{i})\\ / \\ P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i}))}
{ P(\\theta\\prime, \\omega\\prime \\mid M\_{e})\\ / \\ P(\\theta\\prime, \\omega\\prime \\mid D, M\_{e})}
\& \\textcolor{gray}{\[\\text{by def.\~(identity of LH)}]}
\\\\
\& \= \\frac{P(\\theta \\in I\_{i} \\mid D, M\_{e})}{P(\\theta \\in I\_{i} \\mid M\_{e})}
& \\textcolor{gray}{\[\\text{by Obs.\~2 \\\& 3}]}
\\end{align\*}\\]
Theorem [11\.3](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-alternative) states that the Bayes Factor in favor of \\(M\_0\\), when compared against the alternative “negated” model \\(M\_1\\), is the posterior *odds* of \\(\\theta\\) being in \\(I\_0\\) divided by the prior *odds*, both from the perspective of \\(M\_e\\).
**Theorem 11\.3** The Bayes Factor in favor of model \\(M\_{0}\\) over alternative model \\(M\_{1}\\) is:
\\\[\\begin{align\*}
\\mathrm{BF}\_{01} \= \\frac{P(\\theta \\in I\_{0} \\mid D, M\_{e})}{P(\\theta \\in I\_{1} \\mid D, M\_{e})} \\ \\frac{P(\\theta \\in I\_{1} \\mid M\_{e})}{P(\\theta \\in I\_{0} \\mid M\_{e})}
\\end{align\*}\\]
Show proof.
*Proof*. This result follows as a direct corollary from Theorem [11\.2](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-encompassing) and Proposition [10\.1](Chap-03-06-model-comparison-BF.html#prp:transitivity-BF).
Which comparison should be used for quantifying evidence in favor of or against \\(M\_0\\): the encompassing model \\(M\_e\\) or the alternative, “negation” model \\(M\_1\\)?
There are good reasons for taking \\(M\_1\\).
Here is why.
Suppose we hypothesize that a coin is biased towards heads, i.e., we consider the interval\-valued hypothesis of interest \\(H\_0\\) that \\(\\theta \> 0\.5\\), where \\(\\theta\\) is the parameter of a Binomial likelihood function.
Suppose we see \\(k \= 100\\) from \\(N\=100\\) tosses landing heads.
That is, intuitively, extremely strong evidence in favor of our hypothesis.
But if, as may be prudent, the encompassing model is neutral between our hypothesis and its negation, so that \\(P(\\theta \> 0\.5 \\mid M\_{e}) \= 0\.5\\), the biggest Bayes Factor that we could possibly attain in favor of \\(\\theta \> 0\.5\\) over the encompassing model, no matter what data we observe, is 2\.
This is because, by Theorem [11\.2](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-encompassing), the numerator can at most be 1 and the denominator is fixed, by assumption, to be 0\.5\.
That does not seem like an intuitive way of quantifying the evidence in favor of \\(\\theta \> 0\.5\\) when observing \\(k\=100\\) out of \\(N\=100\\), which seems quite overwhelming.
Instead, by Theorem [11\.3](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-alternative), the Bayes Factor for a comparison of \\(\\theta \> 0\.5\\) against \\(\\theta \\le 0\.5\\) is virtually infinite, reflecting the intuition that this data set provides overwhelming support for the idea that \\(\\theta \> 0\.5\\).
Based on considerations like these, it seems that the more intuitive comparison is against the negation of an interval\-valued hypothesis, not against the encompassing model.
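Here is a minimal numerical sketch of this argument, assuming a flat \\(\\text{Beta}(1,1\)\\) encompassing prior for the coin’s bias (our own illustration, mirroring the two theorems above):
```
# encompassing model M_e: theta ~ Beta(1, 1); data: k = 100 heads in N = 100 flips,
# so the posterior under M_e is Beta(101, 1)
prior_H0 <- 1 - pbeta(0.5, 1, 1) # P(theta > 0.5 | M_e) = 0.5
posterior_H0 <- 1 - pbeta(0.5, 101, 1) # P(theta > 0.5 | D, M_e), essentially 1
# comparison against the encompassing model (Theorem 11.2): capped at 1 / 0.5 = 2
BF_0e <- posterior_H0 / prior_H0
BF_0e
# comparison against the negation theta <= 0.5 (Theorem 11.3): astronomically large
prior_odds <- prior_H0 / (1 - prior_H0) # = 1
posterior_odds <- (1 - pbeta(0.5, 101, 1)) / pbeta(0.5, 101, 1) # about 2.5e30
BF_01 <- posterior_odds / prior_odds
BF_01
```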
#### 11\.4\.2\.1 Example: 24/7
The Bayes factor for the interval\-valued (ROPE\-d) hypothesis \\(\\theta \= 0\.5 \\pm \\epsilon\\), computed via the encompassing\-models approach, is:
```
# set the scene
theta_null <- 0.5
epsilon <- 0.01 # epsilon margin for ROPE
upper <- theta_null + epsilon # upper bound of ROPE
lower <- theta_null - epsilon # lower bound of ROPE
# calculate prior odds of the ROPE-d hypothesis
prior_of_hypothesis <- pbeta(upper, 1, 1) - pbeta(lower, 1, 1)
prior_odds <- prior_of_hypothesis / (1 - prior_of_hypothesis)
# calculate posterior odds of the ROPE-d hypothesis
posterior_of_hypothesis <- pbeta(upper, 8, 18) - pbeta(lower, 8, 18)
posterior_odds <- posterior_of_hypothesis / (1 - posterior_of_hypothesis)
# calculate Bayes factor
bf_ROPEd_hypothesis <- posterior_odds / prior_odds
bf_ROPEd_hypothesis
```
```
## [1] 0.5133012
```
This is unnoteworthy evidence in favor of the alternative hypothesis (Bayes factor \\(\\text{BF}\_{10} \\approx 1\.95\\)).
Notice that the reason why the alternative hypothesis does not fare better in this analysis is that it also includes a lot of parameter values (\\(\\theta \> 0\.5\\)) which explain the observed data even more poorly than the values included in the null hypothesis.
We can also use this approach to test the directional hypothesis that \\(\\theta \< 0\.5\\).
```
# calculate prior odds of the directional hypothesis
# [trivial in the case at hand, but just to be explicit]
prior_of_hypothesis <- pbeta(0.5, 1, 1)
prior_odds <- prior_of_hypothesis / (1 - prior_of_hypothesis)
# calculate posterior odds of the directional hypothesis
posterior_of_hypothesis <- pbeta(0.5, 8, 18)
posterior_odds <- posterior_of_hypothesis / (1 - posterior_of_hypothesis)
# calculate Bayes factor
bf_directional_hypothesis <- posterior_odds / prior_odds
bf_directional_hypothesis
```
```
## [1] 45.20512
```
Here we should conclude that the data provide substantial evidence in favor of the assumption that the coin is biased towards tails, when compared against the alternative assumption that it is biased towards heads.
If the dichotomy is “heads bias vs tails bias” the data clearly tilts our beliefs towards the “tails bias” possibility.
#### 11\.4\.2\.2 Example: Simon task
Using posterior samples, we can also do similar calculations for the Simon task.
Let’s first approximate the Bayes factor in favor of the ROPE\-d hypothesis \\(\\delta \= 0 \\pm 0\.1\\) when compared against the alternative hypothesis \\(\\delta \\not \\in 0 \\pm 0\.1\\).
```
# estimating the BF for ROPE-d hypothesis with encompassing priors
delta_null <- 0
epsilon <- 0.1 # epsilon margin for ROPE
upper <- delta_null + epsilon # upper bound of ROPE
lower <- delta_null - epsilon # lower bound of ROPE
# calculate prior odds of the ROPE-d hypothesis
prior_of_hypothesis <- pnorm(upper, 0, 1) - pnorm(lower, 0, 1)
prior_odds <- prior_of_hypothesis / (1 - prior_of_hypothesis)
# calculate posterior odds of the ROPE-d hypothesis
posterior_of_hypothesis <- mean( lower <= delta_samples & delta_samples <= upper )
posterior_odds <- posterior_of_hypothesis / (1 - posterior_of_hypothesis)
# calculate Bayes factor
bf_ROPEd_hypothesis <- posterior_odds / prior_odds
bf_ROPEd_hypothesis
```
```
## [1] 0
```
This is overwhelming evidence against the ROPE\-d hypothesis that \\(\\delta \= 0 \\pm 0\.1\\).
We can also use this approach to test the directional hypothesis that \\(\\delta \> 0\\).
```
# calculate prior odds of the directional hypothesis
# [trivial in the case at hand, but just to be explicit]
prior_of_hypothesis <- 1 - pnorm(0, 0, 1)
prior_odds <- prior_of_hypothesis / (1 - prior_of_hypothesis)
# calculate posterior odds of the directional hypothesis
posterior_of_hypothesis <- mean( delta_samples >= 0 )
posterior_odds <- posterior_of_hypothesis / (1 - posterior_of_hypothesis)
# calculate Bayes factor
bf_directional_hypothesis <- posterior_odds / prior_odds
bf_directional_hypothesis
```
```
## [1] Inf
```
Modulo imprecision induced by sampling, we see that the evidence in favor of the directional hypothesis \\(\\delta \> 0\\) is immense.
**Exercise 11\.4: True or False?**
Decide for the following statements whether they are true or false.
1. An encompassing model for addressing ROPE\-d hypotheses needs two competing models nested under it.
2. A Bayes factor of \\(BF\_{01} \= 20\\) constitutes strong evidence in favor of the alternative hypothesis.
3. A Bayes factor of \\(BF\_{10} \= 20\\) constitutes minor evidence in favor of the alternative hypothesis.
4. We can compute the BF in favor of the alternative hypothesis with \\(BF\_{10} \= \\frac{1}{BF\_{01}}\\).
Solution
Statements a. and d. are correct.
### 11\.4\.1 The Savage\-Dickey method
The Savage\-Dickey method is a very convenient way of computing Bayes factors for *nested models*, especially when models only differ with respect to one parameter.
Suppose that there are \\(n\\) continuous parameters of interest \\(\\theta \= \\langle \\theta\_1, \\dots, \\theta\_n \\rangle\\). \\(M\_1\\) is a (Bayesian) model defined by \\(P(\\theta \\mid M\_1\)\\) and \\(P(D \\mid \\theta, M\_1\)\\). \\(M\_0\\) is **properly nested** under \\(M\_1\\) if:
* \\(M\_0\\) assigns fixed values to parameters \\(\\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n\\)
* \\(P(D \\mid \\theta\_1, \\dots, \\theta\_{i\-1}, M\_0\) \= P(D \\mid \\theta\_1, \\dots, \\theta\_{i\-1}, \\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n, M\_1\)\\)
* \\(\\lim\_{\\theta\_i \\rightarrow x\_i, \\dots, \\theta\_n \\rightarrow x\_n} P(\\theta\_1, \\dots, \\theta\_{i\-1} \\mid \\theta\_i, \\dots, \\theta\_n, M\_1\) \= P(\\theta\_1, \\dots, \\theta\_{i\-1} \\mid M\_0\)\\)
Intuitively put, \\(M\_0\\) is properly nested under \\(M\_1\\), if \\(M\_0\\) is a special case of \\(M\_1\\) which fixes certain parameters to specific point\-values.
Notice that the last condition is satisfied in particular when \\(M\_1\\)’s prior over \\(\\theta\_1, \\dots, \\theta\_{i\-1}\\) is independent of the values for the remaining parameters.
We can express a point\-valued hypothesis in terms of a model \\(M\_0\\) which is nested under the alternative model \\(M\_1\\), the latter of which assumes that the parameters in question can take more than one value.
For such properly nested models, we can compute a Bayes factor efficiently using the following result.
**Theorem 11\.1 (Savage\-Dickey Bayes factors for nested models)** Let \\(M\_0\\) be properly nested under \\(M\_1\\) s.t. \\(M\_0\\) fixes \\(\\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n\\). The Bayes factor \\(\\text{BF}\_{01}\\) in favor of \\(M\_0\\) over \\(M\_1\\) is then given by the ratio of posterior probability to prior probability of the parameters \\(\\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n\\) from the point of view of the nesting model \\(M\_1\\):
\\\[
\\begin{aligned}
\\text{BF}\_{01} \& \= \\frac{P(\\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n \\mid D, M\_1\)}{P(\\theta\_i \= x\_i, \\dots, \\theta\_n \= x\_n \\mid M\_1\)}
\\end{aligned}
\\]
Show proof.
*Proof*. Let’s assume that \\(M\_0\\) has parameters \\(\\theta \= \\langle\\phi, \\psi\\rangle\\) with \\(\\phi \= \\phi\_0\\), and that \\(M\_1\\) has parameters \\(\\theta \= \\langle\\phi, \\psi \\rangle\\) with \\(\\phi\\) free to vary. If \\(M\_0\\) is properly nested under \\(M\_1\\), we know that \\(\\lim\_{\\phi \\rightarrow \\phi\_0} P(\\psi \\mid \\phi, M\_1\) \= P(\\psi \\mid M\_0\)\\). We can then rewrite the marginal likelihood under \\(M\_0\\) as follows:
\\\[
\\begin{aligned}
P(D \\mid M\_0\) \& \= \\int P(D \\mid \\psi, M\_0\) P(\\psi \\mid M\_0\) \\ \\text{d}\\psi
\& \\text{\[marginalization]}
\\\\
\& \= \\int P(D \\mid \\psi, \\phi \= \\phi\_0, M\_1\) P(\\psi \\mid \\phi \= \\phi\_0, M\_1\) \\ \\text{d}\\psi
\& \\text{\[assumption of nesting]}
\\\\
\& \= P(D \\mid \\phi \= \\phi\_0, M\_1\)
\& \\text{\[marginalization]}
\\\\
\& \= \\frac{P(\\phi \= \\phi\_0 \\mid D, M\_1\) P(D \\mid M\_1\)}{P(\\phi \= \\phi\_0 \\mid M\_1\)}
\& \\text{\[Bayes rule]}
\\end{aligned}
\\]
The result follows if we divide by \\(P(D \\mid M\_1\)\\) on both sides of the equation.
#### 11\.4\.1\.1 Example: 24/7
Here is an example based on the 24/7 data. For a nesting model with a flat prior (\\(\\theta \\sim^{M\_1} \\text{Beta}(1,1\)\\)), and a point hypothesis \\(\\theta\_c \= 0\.5\\), we just have to calculate the prior and posterior probability of the critical value \\(\\theta\_c \= 0\.5\\):
```
# point-value of interest
theta_star <- 0.5
# posterior probability in nesting model
posterior_theta_star <- dbeta(theta_star, 8, 18)
# prior probability in nesting model
prior_theta_star <- dbeta(theta_star, 1, 1)
# Bayes factor (using Savage-Dickey)
BF_01 <- posterior_theta_star / prior_theta_star
BF_01
```
```
## [1] 0.5157351
```
This is very minor evidence in favor of the alternative model (Bayes factor \\(\\text{BF}\_{10} \\approx 1\.94\\)). We would not like to draw any (strong) categorical conclusions from this result regarding the question of whether the coin might be fair. Figure [11\.6](ch-03-05-Bayesian-testing-comparison.html#fig:ch-03-07-hypothesis-testing-Bayes-SD-24-7) also shows the relation between prior and posterior at the point\-value of interest.
Figure 11\.6: Illustration of the Savage\-Dickey method of Bayes factor computation for the 24/7 case.
#### 11\.4\.1\.2 Example: Simon task
In the previous 24/7 example, using the Savage\-Dickey method was particularly easy because we know a closed\-form solution of the precise posterior, so that we could easily calculate the posterior for the critical value without further ado.
When this is not the case, like in the application to the Simon task data, we have to obtain an estimate for the posterior density at the critical value, here: \\(\\delta \= 0\\), from the posterior samples which we obtain from sampling, as we did earlier in this chapter (using Stan).
An approximate method for obtaining this value is implemented in the `polspline` package (using polynomial splines to approximate the posterior curve).
```
# extract the samples for the delta parameter
# from the earlier Stan fit
delta_samples <- tidy_draws_tt2 %>%
filter(Parameter == "delta") %>%
pull(value)
# estimating the posterior density at delta = 0 with polynomial splines
fit.posterior <- polspline::logspline(delta_samples)
posterior_delta_null <- polspline::dlogspline(0, fit.posterior)
# computing the prior density of the point-value of interest
# [NB: the prior on delta was a standard normal]
prior_delta_null <- dnorm(0, 0, 1)
# compute BF via Savage-Dickey
BF_delta_null = posterior_delta_null / prior_delta_null
BF_delta_null
```
```
## [1] 2.148062e-14
```
We conclude from this result that the data provide extremely strong evidence against the null model, which assumes that \\(\\delta \= 0\\), when compared to an alternative model \\(M\_1\\), which assumes that \\(\\delta \\sim \\mathcal{N}(0,1\)\\) in the prior.
**Exercise 11\.3: Bayes factors with the Savage\-Dickey method**
Look at the plot below. You see the prior distribution and the posterior distribution over the \\(\\delta\\) parameter in a Bayesian \\(t\\)\-test model. We are going to use this plot to determine (roughly) the Bayes factor of two models: the full Bayesian \\(t\\)\-test model, and a model nested under this full model which assumes that \\(\\delta \= 0\\).
1. Describe in intuitive terms what it means for a Bayesian model to be nested under another model. It is sufficient to neglect the conditions on the priors.
Solution
A model nested under another model fixes certain parameters to specific values which may take on more than one value in the nesting model.
2. Write down the formula for the Bayes factor in favor of the null model (where \\(\\delta \= 0\\)) over the full model using the Savage\-Dickey theorem.
Solution
\\(BF\_{01}\=\\frac{P(\\delta \= 0 \\mid D, M\_1\)}{P(\\delta \= 0 \\mid M\_1\)}\\).
3. Give a natural language paraphrase of the formula you wrote down above.
Solution
The Bayes factor in favor of the embedded null model over the embedding model is given by the posterior density at \\(\\delta \= 0\\) under the nesting model divided by the prior in the nesting model at \\(\\delta \= 0\\).
4. Now look at the plot above. Give your approximate guess of the Bayes factor in favor of the null model in terms of a fraction of whole integers (something like: \\(\\frac{4}{3}\\) or \\(\\frac{27}{120}\\), …).
Solution
\\(BF\_{01} \\approx \\frac{5}{2}\\) (see plot above).
5. Formulate a conclusion to be drawn from this numerical result about the research hypothesis that the mean of the two groups compared here is identical. Write one concise sentence like you would in a research paper.
Solution
A BF of \\(\\frac{5}{2}\\) is mild evidence in favor of the null model, but conventionally not considered strong enough to be particularly noteworthy.
#### 11\.4\.1\.3 \[Excursion:] Calculating the Bayes factor precisely
under construction
### 11\.4\.2 Encompassing models
The Savage\-Dickey method can be generalized to also cover interval\-valued hypotheses.
The previous literature has focused on inequality\-based intervals/hypotheses (like \\(\\theta \\ge 0\.5\\)) ([Klugkist, Kato, and Hoijtink 2005](#ref-KlugkistKato2005:Bayesian-model); [Wetzels, Grasman, and Wagenmakers 2010](#ref-WetzelsGrasman2010:An-encompassing); [Oh 2014](#ref-Oh2014:Bayesian-compar)), but the method also applies to ROPE\-d hypotheses.
The advantage of this method is that we can use samples from the posterior distribution to approximate integrals, which is more robust than having to estimate point\-values of posterior density.
Following previous work ([Klugkist, Kato, and Hoijtink 2005](#ref-KlugkistKato2005:Bayesian-model); [Wetzels, Grasman, and Wagenmakers 2010](#ref-WetzelsGrasman2010:An-encompassing); [Oh 2014](#ref-Oh2014:Bayesian-compar)), the main idea is to use so\-called **encompassing priors**. Let \\(\\theta\\) be a single parameter of interest (for simplicity[53](#fn53)), which can in principle take on any real value. We are interested in the interval\-based hypotheses:
* \\(H\_0 \\colon \\theta \\in I\_0\\), and
* \\(H\_1 \\colon \\theta \\in I\_{1}\\)
where \\(I\_{0}\\) is an interval, possibly half\-open, and \\(I\_1\\) is the “negation” of \\(I\_0\\) (in the sense that \\(I\_1 \= \\left \\{ \\theta \\mid \\theta \\not \\in I\_0 \\right \\}\\)).
An **encompassing model** \\(M\_e\\) has a suitable likelihood function \\(P(D \\mid \\theta, \\omega, M\_{e})\\) (where \\(\\omega\\) is a vector of other parameters besides the parameter \\(\\theta\\) of interest, so\-called “nuisance parameters”).
It also defines a prior \\(P(\\theta, \\omega \\mid M\_{e})\\), which does not already rule out \\(H\_{0}\\) or \\(H\_{1}\\).
Generalizing over the Savage\-Dickey approach, we construct *two* models, one for each hypothesis, *both* of which are nested under the encompassing model:
* \\(M\_0\\) has prior \\(P(\\theta, \\omega \\mid M\_0\) \= P(\\theta, \\omega \\mid \\theta \\in I\_0, M\_e)\\)
* \\(M\_1\\) has prior \\(P(\\theta, \\omega \\mid M\_1\) \= P(\\theta, \\omega \\mid \\theta \\in I\_1, M\_e)\\)
We assume that the priors over \\(\\theta\\) are independent of the nuisance parameters \\(\\omega\\).
Both \\(M\_0\\) and \\(M\_1\\) have the same likelihood function as \\(M\_e\\).
Figure [11\.7](ch-03-05-Bayesian-testing-comparison.html#fig:ch-03-07-hypothesis-testing-Bayes-encompassing-prior) shows an example of the priors of an encompassing model for two nested models based on a ROPE\-d hypothesis testing approach.
Figure 11\.7: Example of the prior of an encompassing model and the priors of two models nested under it.
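To make this construction more tangible, here is a minimal sketch in R. It assumes, purely for illustration, the standard normal prior on \\(\\delta\\) used for the \\(t\\)\-test model above and a ROPE of \\(\\pm 0\.1\\) around zero; the priors of the two nested models are then simply the encompassing prior truncated to \\(I\_0\\) and \\(I\_1\\):
```
# Sketch: nested priors obtained by truncating an encompassing prior
# (illustrative assumptions: standard normal prior on delta, ROPE = [-0.1, 0.1])
prior_samples_enc <- rnorm(1e6, 0, 1)          # samples from the encompassing prior
in_I0 <- abs(prior_samples_enc) <= 0.1         # which samples fall into I_0 (the ROPE)
prior_samples_M0 <- prior_samples_enc[in_I0]   # prior samples of nested model M_0
prior_samples_M1 <- prior_samples_enc[!in_I0]  # prior samples of nested model M_1
mean(in_I0)   # approximates the prior probability P(theta in I_0 | M_e)
```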
If our hypothesis of interest is \\(I\_0\\), which is captured in \\(M\_0\\), there are two comparisons we can make to quantify evidence in favor of or against \\(M\_0\\): we can compare \\(M\_{0}\\) against the encompassing model \\(M\_{e}\\), or against its “negation” \\(M\_{1}\\).
Bayes Factors for both comparisons can be easily expressed with the encompassing\-models approach, as shown in Theorems [11\.2](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-encompassing) and [11\.3](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-alternative).
Essentially, we can express Bayes Factors in terms of statements regarding the prior or posterior probability of \\(I\_0\\) and \\(I\_1\\) from the point of view of the encompassing model alone.
This means that we can approximate these Bayes Factors by just setting up one model, the encompassing model, and retrieving prior and posterior samples for it.
Concretely, Theorem [11\.2](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-encompassing) states that the Bayes Factor in favor of \\(M\_i\\), when compared against the encompassing model \\(M\_e\\), is the ratio of the posterior probability of \\(\\theta\\) being in \\(I\_i\\) to the prior probability, both from the perspective of \\(M\_e\\).
**Theorem 11\.2** The Bayes Factor in favor of nested model \\(M\_{i}\\) over encompassing model \\(M\_{e}\\) is:
\\\[\\begin{align\*}
\\mathrm{BF}\_{ie} \= \\frac{P(\\theta \\in I\_{i} \\mid D, M\_{e})}{P(\\theta \\in I\_{i} \\mid M\_{e})}
\\end{align\*}\\]
Show proof.
*Proof*. The following is only a sketch of a proof.
Important formal details are glossed over.
For more detail, see ([Klugkist and Hoijtink 2007](#ref-KlugkistHoijtink2007:The-Bayes-facto)).
We start by making three observations which hold for any model \\(M\_{i}\\), \\(i \\in \\left \\{ 0,1 \\right \\}\\), and any pair of vectors of parameter values \\(\\theta\\prime, \\omega\\prime\\) such that \\(P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i}) \\neq 0\\) (which entails that \\(\\theta\\prime \\in I\_{i}\\), \\(P(\\theta\\prime, \\omega\\prime \\mid M\_{i}) \> 0\\) and \\(P(D \\mid \\theta\\prime, \\omega\\prime, M\_{i}) \> 0\\)):
* **Observation 1:** The definition of the posterior:
\\\[\\begin{align\*}
P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i}) \= \\frac{P(D \\mid \\theta\\prime, \\omega\\prime, M\_{i}) \\ P(\\theta\\prime, \\omega\\prime \\mid M\_{i})}{P(D \\mid M\_{i})}
\\end{align\*}\\]
can be rewritten as:
\\\[\\begin{align\*}
P(D \\mid M\_{i}) \= \\frac{P(D \\mid \\theta\\prime, \\omega\\prime, M\_{i}) \\ P(\\theta\\prime, \\omega\\prime \\mid M\_{i})}{P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i})}
\\end{align\*}\\]
This also holds for model \\(M\_{e}\\).
* **Observation 2:** The prior for \\(\\theta\\prime, \\omega\\prime\\) in \\(M\_{i}\\) can be expressed in terms of the priors in \\(M\_{e}\\) as:
\\\[\\begin{align\*}
P(\\theta\\prime, \\omega\\prime \\mid M\_{i}) \= \\frac{P(\\theta\\prime, \\omega\\prime \\mid M\_{e})}{P(\\theta \\in I\_{i} \\mid M\_{e})}
\\end{align\*}\\]
* **Observation 3:** The posterior for \\(\\theta\\prime, \\omega\\prime\\) in \\(M\_{i}\\) can be expressed in terms of the posteriors in \\(M\_{e}\\) as:
\\\[\\begin{align\*}
P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i}) \= \\frac{P(\\theta\\prime, \\omega\\prime \\mid D, M\_{e})}{P(\\theta \\in I\_{i} \\mid D, M\_{e})}
\\end{align\*}\\]
With these observations in place, we can rewrite the Bayes Factor \\(\\mathrm{BF}\_{ie}\\) in terms of a pair of vectors of parameter values \\(\\theta\\prime, \\omega\\prime\\) (for which \\(P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i}) \\neq 0\\)) as:
\\\[\\begin{align\*}
\\mathrm{BF}\_{ie}
\& \= \\frac{P(D \\mid M\_{i})}{P(D \\mid M\_{e})}
\\\\
\& \= \\frac{P(D \\mid \\theta\\prime, \\omega\\prime, M\_{i}) \\ P(\\theta\\prime, \\omega\\prime \\mid M\_{i})\\ / \\ P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i})}
{P(D \\mid \\theta\\prime, \\omega\\prime, M\_{e}) \\ P(\\theta\\prime, \\omega\\prime \\mid M\_{e})\\ / \\ P(\\theta\\prime, \\omega\\prime \\mid D, M\_{e})}
\& \\textcolor{gray}{\[\\text{by Obs.\~1}]}
\\\\
\& \= \\frac{ P(\\theta\\prime, \\omega\\prime \\mid M\_{i})\\ / \\ P(\\theta\\prime, \\omega\\prime \\mid D, M\_{i}))}
{ P(\\theta\\prime, \\omega\\prime \\mid M\_{e})\\ / \\ P(\\theta\\prime, \\omega\\prime \\mid D, M\_{e})}
\& \\textcolor{gray}{\[\\text{by def.\~(identity of LH)}]}
\\\\
\& \= \\frac{P(\\theta \\in I\_{i} \\mid D, M\_{e})}{P(\\theta \\in I\_{i} \\mid M\_{e})}
\& \\textcolor{gray}{\[\\text{by Obs.\~2 \\\& 3}]}
\\end{align\*}\\]
Theorem [11\.3](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-alternative) states that the Bayes Factor in favor of \\(M\_0\\), when compared against the alternative “negated” model \\(M\_1\\), is the ratio of the posterior *odds* of \\(\\theta\\) being in \\(I\_0\\) to the prior *odds*, both from the perspective of \\(M\_e\\).
**Theorem 11\.3** The Bayes Factor in favor of model \\(M\_{0}\\) over alternative model \\(M\_{1}\\) is:
\\\[\\begin{align\*}
\\mathrm{BF}\_{01} \= \\frac{P(\\theta \\in I\_{0} \\mid D, M\_{e})}{P(\\theta \\in I\_{1} \\mid D, M\_{e})} \\ \\frac{P(\\theta \\in I\_{1} \\mid M\_{e})}{P(\\theta \\in I\_{0} \\mid M\_{e})}
\\end{align\*}\\]
Show proof.
*Proof*. This result follows as a direct corollary from Theorem [11\.2](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-encompassing) and Proposition [10\.1](Chap-03-06-model-comparison-BF.html#prp:transitivity-BF).
Which comparison should be used for quantifying evidence in favor of or against \\(M\_0\\): the comparison against the encompassing model \\(M\_e\\), or the comparison against the alternative “negation” model \\(M\_1\\)?
There are good reasons for taking \\(M\_1\\).
Here is why.
Suppose we hypothesize that a coin is biased towards heads, i.e., we consider the interval\-valued hypothesis of interest \\(H\_0\\) that \\(\\theta \> 0\.5\\), where \\(\\theta\\) is the parameter of a Binomial likelihood function.
Suppose we see \\(k \= 100\\) of \\(N\=100\\) tosses landing heads.
That is, intuitively, extremely strong evidence in favor of our hypothesis.
But if, as may be prudent, the encompassing model is neutral between our hypothesis and its negation, so that \\(P(\\theta \> 0\.5 \\mid M\_{e}) \= 0\.5\\), the biggest Bayes Factor that we could possibly attain in favor of \\(\\theta \> 0\.5\\) over the encompassing model, no matter what data we observe, is 2\.
This is because, by Theorem [11\.2](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-encompassing), the numerator can at most be 1 and the denominator is fixed, by assumption, to be 0\.5\.
That does not seem like an intuitive way of quantifying the evidence in favor of \\(\\theta \> 0\.5\\) when observing \\(k\=100\\) out of \\(N\=100\\), an observation which seems quite overwhelming.
Instead, by Theorem [11\.3](ch-03-05-Bayesian-testing-comparison.html#thm:encompassing-BG-against-alternative), the Bayes Factor for a comparison of \\(\\theta \> 0\.5\\) against \\(\\theta \\le 0\.5\\) is virtually infinite, reflecting the intuition that this data set provides overwhelming support for the idea that \\(\\theta \> 0\.5\\).
Based on considerations like these, it seems that the more intuitive comparison is against the negation of an interval\-valued hypothesis, not against the encompassing model.
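To make the arithmetic behind this argument concrete, here is a minimal sketch in R. It assumes, purely for illustration, a Binomial likelihood with a flat \\(\\text{Beta}(1,1\)\\) prior on \\(\\theta\\) in the encompassing model, so that after observing \\(k\=100\\) heads in \\(N\=100\\) tosses the posterior is \\(\\text{Beta}(101,1\)\\):
```
# Sketch: k = 100 heads in N = 100 tosses under a flat Beta(1,1) prior on theta,
# so the posterior in the encompassing model M_e is Beta(101, 1)
post_I0  <- 1 - pbeta(0.5, 101, 1)   # posterior P(theta > 0.5 | D, M_e), essentially 1
prior_I0 <- 0.5                      # prior P(theta > 0.5 | M_e)
# BF of theta > 0.5 against the encompassing model (Theorem 11.2): capped at 2
BF_0e <- post_I0 / prior_I0
# BF of theta > 0.5 against its negation theta <= 0.5 (Theorem 11.3): astronomically large
BF_01 <- (post_I0 / (1 - post_I0)) / (prior_I0 / (1 - prior_I0))
c(BF_0e = BF_0e, BF_01 = BF_01)
```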
#### 11\.4\.2\.1 Example: 24/7
The Bayes factor using the ROPE\-d method to compute the interval\-valued hypothesis \\(\\theta \= 0\.5 \\pm \\epsilon\\) is:
```
# set the scene
theta_null <- 0.5
epsilon <- 0.01 # epsilon margin for ROPE
upper <- theta_null + epsilon # upper bound of ROPE
lower <- theta_null - epsilon # lower bound of ROPE
# calculate prior odds of the ROPE-d hypothesis
prior_of_hypothesis <- pbeta(upper, 1, 1) - pbeta(lower, 1, 1)
prior_odds <- prior_of_hypothesis / (1 - prior_of_hypothesis)
# calculate posterior odds of the ROPE-d hypothesis
posterior_of_hypothesis <- pbeta(upper, 8, 18) - pbeta(lower, 8, 18)
posterior_odds <- posterior_of_hypothesis / (1 - posterior_of_hypothesis)
# calculate Bayes factor
bf_ROPEd_hypothesis <- posterior_odds / prior_odds
bf_ROPEd_hypothesis
```
```
## [1] 0.5133012
```
This is unnoteworthy evidence in favor of the alternative hypothesis (Bayes factor \\(\\text{BF}\_{10} \\approx 1\.95\\)).
Notice that the reason why the alternative hypothesis does not fare better in this analysis is because it also includes a lot of parameter values (\\(\\theta \> 0\.5\\)) which explain the observed data even more poorly than the values included in the null hypothesis.
We can also use this approach to test the directional hypothesis that \\(\\theta \< 0\.5\\).
```
# calculate prior odds of the directional hypothesis theta < 0.5
# [trivial in the case at hand, but just to be explicit]
prior_of_hypothesis <- pbeta(0.5, 1, 1)
prior_odds <- prior_of_hypothesis / (1 - prior_of_hypothesis)
# calculate posterior odds of the directional hypothesis
posterior_of_hypothesis <- pbeta(0.5, 8, 18)
posterior_odds <- posterior_of_hypothesis / (1 - posterior_of_hypothesis)
# calculate Bayes factor
bf_directional_hypothesis <- posterior_odds / prior_odds
bf_directional_hypothesis
```
```
## [1] 45.20512
```
Here we should conclude that the data provide substantial evidence in favor of the assumption that the coin is biased towards tails, when compared against the alternative assumption that it is biased towards heads.
If the dichotomy is “heads bias vs tails bias” the data clearly tilts our beliefs towards the “tails bias” possibility.
#### 11\.4\.2\.2 Example: Simon task
Using posterior samples, we can also do similar calculations for the Simon task.
Let’s first approximate the Bayes factor in favor of the ROPE\-d hypothesis \\(\\delta \= 0 \\pm 0\.1\\) when compared against the alternative hypothesis \\(\\delta \\not \\in 0 \\pm 0\.1\\).
```
# estimating the BF for ROPE-d hypothesis with encompassing priors
delta_null <- 0
epsilon <- 0.1 # epsilon margin for ROPE
upper <- delta_null + epsilon # upper bound of ROPE
lower <- delta_null - epsilon # lower bound of ROPE
# calculate prior odds of the ROPE-d hypothesis
prior_of_hypothesis <- pnorm(upper, 0, 1) - pnorm(lower, 0, 1)
prior_odds <- prior_of_hypothesis / (1 - prior_of_hypothesis)
# calculate posterior odds of the ROPE-d hypothesis
posterior_of_hypothesis <- mean( lower <= delta_samples & delta_samples <= upper )
posterior_odds <- posterior_of_hypothesis / (1 - posterior_of_hypothesis)
# calculate Bayes factor
bf_ROPEd_hypothesis <- posterior_odds / prior_odds
bf_ROPEd_hypothesis
```
```
## [1] 0
```
This is overwhelming evidence against the ROPE\-d hypothesis that \\(\\delta \= 0 \\pm 0\.1\\).
We can also use this approach to test the directional hypothesis that \\(\\delta \> 0\.5\\).
```
# calculate prior odds of the directional hypothesis delta > 0.5
prior_of_hypothesis <- 1 - pnorm(0.5, 0, 1)
prior_odds <- prior_of_hypothesis / (1 - prior_of_hypothesis)
# calculate posterior odds of the directional hypothesis
# (estimated from the posterior samples)
posterior_of_hypothesis <- mean( delta_samples >= 0.5 )
posterior_odds <- posterior_of_hypothesis / (1 - posterior_of_hypothesis)
# calculate Bayes factor
bf_directional_hypothesis <- posterior_odds / prior_odds
bf_directional_hypothesis
```
```
## [1] Inf
```
Modulo imprecision induced by sampling, we see that the evidence in favor of the directional hypothesis \\(\\delta \> 0\.5\\) is immense.
**Exercise 11\.4: True or False?**
Decide for the following statements whether they are true or false.
1. An encompassing model for addressing ROPE\-d hypotheses needs two competing models nested under it.
2. A Bayes factor of \\(BF\_{01} \= 20\\) constitutes strong evidence in favor of the alternative hypothesis.
3. A Bayes factor of \\(BF\_{10} \= 20\\) constitutes minor evidence in favor of the alternative hypothesis.
4. We can compute the BF in favor of the alternative hypothesis with \\(BF\_{10} \= \\frac{1}{BF\_{01}}\\).
Solution
Statements a. and d. are correct.
12\.1 Ordinary least squares regression
---------------------------------------
This section introduces ordinary least squares (OLS) linear regression. The main idea is that we look for the best\-fitting line in a (multi\-dimensional) cloud of points, where “best\-fitting” is defined in terms of a geometrical measure of distance (squared prediction error).
### 12\.1\.1 Prediction without any further information
We are interested in explaining or predicting the murder rates in a city using the [murder data set](app-93-data-sets-murder-data.html#app-93-data-sets-murder-data).
Concretely, we are interested in whether knowing a city’s unemployment rate (stored in variable `unemployment`) helps make better predictions for that city’s murder rate (stored in variable `murder_rate`).
Let’s first plot the murder rate for every city (just numbered consecutively in the order of their appearance in the data set):
Suppose we know the vector \\(y\\) of all observed murder rates but we don’t know which murder rate belongs to which city.
We are given a city and asked to guess its murder rate.
But we cannot tell cities apart.
So we must guess one number as a prediction for any of the cities.
What’s a good guess?
Actually, how good a guess is depends on what we want to do with this guess (the utility function of a decision problem).
For now, let’s just assume that we have a measure of **prediction error** which we would like to minimize with our guesses.
A common measure of **prediction error** uses intuitions about geometric distance and is defined in terms of the **total sum of squares**, where \\(y\\) is the \\(n\\)\-dimensional vector of observed murder rates and \\(\\xi \\in \\mathbb{R}\\) is a single numeric prediction:
\\\[
\\text{TSS}(\\xi) \= \\sum\_{i\=1}^n (y\_i \- \\xi)^2
\\]
This measure of prediction error is what underlies the ordinary least squares approach to regression.
It turns out that the **best prediction** we can make, i.e., the number \\(\\hat{\\xi} \= \\arg \\min\_{\\xi} \\text{TSS}(\\xi)\\) for which TSS is minimized, is the mean \\(\\bar{y}\\) of the observed values.
So, given the goal of minimizing TSS, our best guess is the mean of the observed murder rates.
**Proposition 12\.1 (Mean minimizes total sum of squares.)** \\\[
\\arg \\min\_{\\xi} \\sum\_{i\=1}^n (y\_i \- \\xi)^2 \= \\frac{1}{n} \\sum\_{i\=1}^n y\_i \= \\bar{y}
\\]
Show proof.
*Proof*. To find a minimum, consider the first derivative of \\(\\text{TSS}(\\xi)\\) and find its zero points:
\\\[
\\begin{align\*}
\& f(\\xi) \= \\sum\_{i\=1}^n (y\_i \- \\xi)^2 \= \\sum\_{i\=1}^n (y\_i^2 \- 2 y\_i \\xi \+ \\xi^2\) \\\\
\& f'(\\xi) \= \\sum\_{i\=1}^n (\-2y\_i \+ 2\\xi) \= 0 \\\\
\\Leftrightarrow \& \\sum\_{i\=1}^n \-2y\_i \= \-2 n \\xi \\\\
\\Leftrightarrow \& \\xi \= \\frac{1}{n} \\sum\_{i\=1}^n y\_i \= \\bar{y} \\\\
\\end{align\*}
\\]
Indeed, the zero point \\(\\xi \= \\bar{y}\\) is a minimum because its second derivative is positive:
\\\[
f''(\\bar{y}) \= 2n \> 0
\\]
The plot below visualizes the prediction we make based on the naive predictor \\(\\hat{y}\\).
The black dots show the data points, the red line shows the prediction we make (the mean murder rate), the small hollow dots show the specific predictions for each observed value and the gray lines show the distance between our prediction and the actual data observation.
To obtain the TSS for the prediction shown in the plot above, we would need to take each gray line, measure its distance, square this number and sum over all lines (cities).
In the case at hand, the prediction error we make by assuming just the mean as predictor is:
```
y <- murder_data %>% pull(murder_rate)
n <- length(y)
tss_simple <- sum((y - mean(y))^2)
tss_simple
```
```
## [1] 1855.202
```
At this stage, a question might arise:
Why square the distances to obtain the total sum of, well, *squares*?
One intuitive motivation is that we want small deviations from our prediction to have less overall impact than huge deviations.
A technical motivation is that the best solution to OLS estimation corresponds to the best solution under a maximum likelihood approach, if we use a normal distribution as likelihood function.
This is what we will cover in Section [12\.2](Chap-04-01-linear-regression-MLE.html#Chap-04-01-linear-regression-MLE) after having introduced the regression model in full.
### 12\.1\.2 Prediction with knowledge of unemployment rate
We might not be very content with this prediction error. Suppose we could use some piece of information about the random city whose murder rate we are trying to predict. For instance, we might happen to know the value of the variable `unemployment`. How could that help us make a better prediction?
There does seem to be some useful information in the unemployment rate, which may lead to better predictions of the murder rate. We see this in a scatter plot:
Let us assume, for the sake of current illustration, that we expect a very particular functional relationship between the variables `murder_rate` and `unemployment`. For some reason or other, we hypothesize that even with 0% unemployment, the murder rate would be positive, namely at 4 murders per million inhabitants. We further hypothesize that with each increase of 1% in the unemployment percentage, the murder rate per million increases by 2\. The functional relationship between dependent variable \\(y\\) (\= murder rate) and predictor variable \\(x\\) (\= unemployment) can then be expressed as a linear function of the following form, where \\(\\xi \\in \\mathbb{R}^n\\) is now a vector of \\(n\\) predictions (one prediction \\(\\xi\_i\\) for each data observation \\(y\_i\\)):[54](#fn54)
\\\[
\\xi\_i \= 2x\_i \+ 4
\\]
Here is a graphical representation of this particular functional relationship assumed in the equation above. Again, the black dots show the data points, the red line the linear function \\(f(x) \= 2x \+4\\), the small hollow dots show the specific predictions for each observed value \\(x\_i\\) and the gray lines show the distance between our prediction \\(\\xi\_i\\) and the actual data observation \\(y\_i\\). (Notice that there are data points for which the unemployment rate is the same, but we observed different murder rates.)
We can again quantify our prediction error in terms of a sum of squares like we did before. For the case of a prediction vector \\(\\xi\\), the quantity in question is called the **residual sum of squares**.
\\\[
\\text{RSS} \= \\sum\_{i\=1}^n (y\_i \- \\xi\_i)^2
\\]
Here is how we can calculate RSS in R for the particular vector \\(\\xi \\in \\mathbb{R}^n\\) for which \\(\\xi\_{i} \= 2x\_i \+ 4\\):
```
y <- murder_data %>% pull(murder_rate)
x <- murder_data %>% pull(unemployment)
predicted_y <- 2 * x + 4
n <- length(y)
rss_guesswork <- sum((y - predicted_y)^2)
rss_guesswork
```
```
## [1] 1327.74
```
Compared to the previous prediction, which was based on the mean \\(\\bar{y}\\) only, this linear function reduces the prediction error (measured here geometrically in terms of a sum of squares).
This alone could be taken as *prima facie* evidence that knowledge of `unemployment` helps make better predictions about `murder_rate`.
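One way to summarize this improvement is the relative reduction in prediction error, computed from the two numbers obtained above (`tss_simple` and `rss_guesswork`); for the guessed line \\(\\xi\_i \= 2x\_i \+ 4\\) this comes out at roughly 28%:
```
# Sketch: relative reduction in prediction error achieved by the guessed line,
# using tss_simple and rss_guesswork from the chunks above
(tss_simple - rss_guesswork) / tss_simple
```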
**Exercise 13\.1 \[optional]**
1. Compare RSS and TSS. How / where exactly do these notions differ from each other? Think about which information the difference between the two measures conveys.
Solution
TSS computes the distance between a data point and the overall mean of all data points, whereas RSS computes the distance between a data point and a predictor value specific to this data point.
The difference between RSS and TSS tells us how good our prediction is in comparison to a naive prediction (using just the mean).
2. Is it possible for TSS to be smaller than RSS?
That is, could the error based on a single numeric prediction for all data points be smaller than an error obtained for a linear predictor that has a single prediction for each data point?
Solution
Yes, that’s possible.
The definition of RSS and TSS does not imply that we look at the *optimal* point\-valued or linear predictor.
It is conceivable to choose a good single number and a very bad linear predictor, so that TSS is smaller than RSS.
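A quick numerical illustration of this point, again assuming the `murder_data` tibble used throughout this section (the bad linear predictor below is chosen arbitrarily):
```
# Sketch: a single-number prediction (the mean) can beat a deliberately bad linear predictor
y <- murder_data %>% pull(murder_rate)
x <- murder_data %>% pull(unemployment)
tss_mean <- sum((y - mean(y))^2)    # error of the best single-number prediction
rss_bad  <- sum((y - 1000 * x)^2)   # error of an absurdly steep line through the origin
tss_mean < rss_bad                  # TRUE: here TSS is (much) smaller than RSS
```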
### 12\.1\.3 Linear regression: general problem formulation
Suppose we have \\(k\\) predictor variables \\(x\_1, \\dots , x\_k\\) and a dependent variable \\(y\\).
We consider the linear relation:
\\\[ \\xi\_i({\\beta}\_0, {\\beta}\_1, \\dots, {\\beta}\_k) \= \\beta\_0 \+ \\beta\_1 x\_{1i} \+ \\dots \+ \\beta\_k x\_{ki} \\]
Often we do not explicitly write \\(\\xi\\) as a function of the parameters \\(\\beta\_0, \\dots \\beta\_k\\), and write instead:
\\\[ \\xi\_i \= \\beta\_0 \+ \\beta\_1 x\_{1i} \+ \\dots \+ \\beta\_k x\_{ki} \\]
The parameters \\(\\beta\_0, \\beta\_1, \\dots, \\beta\_k\\) are called **(regression) coefficients**.
In particular, \\(\\beta\_0\\) is called the **(regression) intercept** and \\(\\beta\_1, \\dots, \\beta\_k\\) are **(regression) slope coefficients**.
The term **simple linear regression** is often used to cover the special case of \\(k\=1\\).
If there is more than one predictor, i.e., \\(k \> 1\\), the term **multiple linear regression** is common.
Based on the predictions of a parameter vector \\(\\langle {\\beta}\_0, {\\beta}\_1, \\dots, {\\beta}\_k\\rangle\\), we consider the residual sum of squares as a measure of prediction error:
\\\[\\text{RSS}\_{\\langle {\\beta}\_0, {\\beta}\_1, \\dots, {\\beta}\_k\\rangle} \= \\sum\_{i \= 1}^n \[y\_i \- \\xi\_i ({\\beta}\_0, {\\beta}\_1, \\dots, {\\beta}\_k) ]^2 \\]
We would like to find the *best parameter values* (denoted traditionally by a hat on the parameter’s variable: \\(\\hat{\\beta}\_i\\)) in the sense of minimizing the residual sum of squares:
\\\[
\\langle \\hat{\\beta}\_0, \\hat{\\beta}\_1, \\dots , \\hat{\\beta}\_k\\rangle \= \\arg \\min\_{\\langle \\beta\_0, \\beta\_1, \\dots, \\beta\_k\\rangle} \\text{RSS}\_{\\langle {\\beta}\_0, {\\beta}\_1, \\dots, {\\beta}\_k\\rangle}
\\]
The prediction corresponding to the best parameter values is denoted by \\(\\hat{\\xi} \\in \\mathbb{R}^n\\) and called the *best linear predictor*:
\\\[ \\hat{\\xi}\_i \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_{1i} \+ \\dots \+ \\hat{\\beta}\_k x\_{ki}\\]
It is also possible, and often convenient, to state the linear regression model in terms of matrix operations.
Traditionally, we consider a so\-called **predictor matrix** \\(X\\) of size \\(n \\times (k\+1\)\\), where \\(n\\) is the number of observations in the data set and \\(k\\) is the number of predictor variables.
The predictor matrix includes the values for all predictor variables and it also includes an “intercept column” \\((X^{T})\_0\\) for which \\(X\_{i0}\=1\\) for all \\(1 \\le i \\le n\\) so that the intercept \\(\\beta\_0\\) can be treated on a par with the other regression coefficients.[55](#fn55)
Using the predictor matrix \\(X\\), the linear predictor vector \\(\\xi\\) is:
\\\[\\xi \= X \\beta\\]
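As a brief illustration of this matrix formulation for the running example (a sketch; `model.matrix()` is one convenient way to build \\(X\\), including the intercept column):
```
# Sketch: predictor matrix X and linear predictor xi = X beta for the running example
X <- model.matrix(~ unemployment, data = murder_data)   # n x 2: intercept column + predictor
beta <- c(4, 2)           # the guessed coefficients from above: intercept 4, slope 2
xi <- as.vector(X %*% beta)
head(xi)
```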
**Exercise 13\.2**
How can we interpret the parameters \\(a\\) and \\(b\\) of the linear model \\(\\xi\_i \= a x\_i \+ b\\)?
What are these parameters usually called in regression jargon?
Solution
Parameter \\(a\\) is the slope, \\(b\\) the intercept of a simple linear regression.
Parameter \\(a\\) gives the amount of change of \\(y\\) for each unit change in \\(x\\).
Parameter \\(b\\) gives the prediction \\(\\xi\\) for \\(x\=0\\).
### 12\.1\.4 Finding the OLS\-solution
In the above example, where we regressed `murder_rate` against `unemployment`, the model has two regression coefficients: an intercept term and a slope for `unemployment`.
The optimal solution for these delivers the regression line in the graph below.
The residual sum of squares for the best\-fitting parameters is:
```
## [1] 467.6023
```
This is the best prediction we can make based on a linear predictor.
In the following, we discuss several methods of finding the best\-fitting values for regression coefficients that minimize the residual sum of squares.
#### 12\.1\.4\.1 Finding optimal parameters with `optim`
We can use the `optim` function to find the best\-fitting parameter values for our simple linear regression example.
```
# data to be explained / predicted
y <- murder_data %>% pull(murder_rate)
# data to use for prediction / explanation
x <- murder_data %>% pull(unemployment)
# function to calculate residual sum of squares
get_rss = function(y, x, beta_0, beta_1) {
yPred = beta_0 + x * beta_1
sum((y - yPred)^2)
}
# finding best-fitting values for RSS
fit_rss = optim(par = c(0, 1), # initial parameter values
fn = function(par) { # function to minimize
get_rss(y, x, par[1], par[2])
}
)
# output the results
message(
"Best fitting parameter values:",
"\n\tIntercept: ", fit_rss$par[1] %>% round(2),
"\n\tSlope: ", fit_rss$par[2] %>% round(2),
"\nRSS for best fit: ", fit_rss$value %>% round(2)
)
```
```
## Best fitting parameter values:
## Intercept: -28.53
## Slope: 7.08
## RSS for best fit: 467.6
```
#### 12\.1\.4\.2 Fitting OLS regression lines with `lm`
R also has a built\-in function `lm` which fits linear regression models via RSS minimization. Here is how you call this function for the running example:
```
# fit an OLS regression
fit_lm <- lm(
# the formula argument specifies dependent and independent variables
formula = murder_rate ~ unemployment,
# we also need to say where the data (columns) should come from
data = murder_data
)
# output the fitted object
fit_lm
```
```
##
## Call:
## lm(formula = murder_rate ~ unemployment, data = murder_data)
##
## Coefficients:
## (Intercept) unemployment
## -28.53 7.08
```
The output of the fitted object shows the best\-fitting values (compare them to what we obtained before).[56](#fn56)
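To see that `lm` indeed minimizes the residual sum of squares, we can extract the estimated coefficients and the RSS from the fitted object and compare them with the `optim`\-based results above (a brief sketch using the standard extractor functions `coef()` and `residuals()`):
```
# Sketch: pulling coefficients and the residual sum of squares out of the fitted object
coef(fit_lm)               # best-fitting intercept and slope
sum(residuals(fit_lm)^2)   # residual sum of squares, ca. 467.6 as above
```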
#### 12\.1\.4\.3 Finding optimal parameter values with math
It is also possible to determine the OLS\-fits by a mathematical derivation. We start with the case of a simple linear regression with just one predictor variable.
**Theorem 12\.1 (OLS solution for simple linear regression)** For a simple linear regression model with just one predictor for a data set with \\(n\\) observations, the solution for:
\\\[\\arg \\min\_{\\langle \\beta\_0, \\beta\_1\\rangle} \\sum\_{i \= 1}^n (y\_i \- (\\beta\_0 \+ \\beta\_1 x\_{i}))^2\\]
is given by:
\\\[
\\begin{aligned}
\\hat{\\beta\_1} \&\= \\frac{Cov(x,y)}{Var(x)} \&
\\hat{\\beta\_0} \&\= \\bar{y} \- \\hat{\\beta}\_1 \\bar{x}
\\end{aligned}
\\]
Show proof.
*Proof*. *(See e.g., [Olive 2017, 57–59](#ref-olive2017))*
Given a set of \\(n\\) observations \\((x\_i, y\_i)\\), we want to find:
\\\[\\langle \\hat{\\beta}\_0, \\hat{\\beta}\_1 \\rangle \= \\arg \\min\_{\\langle \\beta\_0, \\beta\_1 \\rangle} \\sum\_{i \= 1}^n (y\_i \- (\\beta\_0 \+ \\beta\_1 x\_{i}))^2\\]
Let \\(Q\\) denote the RSS function. We want to find the minima of \\(Q\\).
So, we want to find the values \\(\\hat\\beta\_0\\) and \\(\\hat\\beta\_1\\) for which \\(\\frac{\\partial Q}{\\partial \\hat\\beta\_0}\=0\\) and \\(\\frac{\\partial Q}{\\partial \\hat\\beta\_1}\=0\\), since all partial derivatives are equal to 0 at the global minimum.
The first condition is:
\\\[ \\begin{align} \\frac{\\partial Q}{\\partial \\hat\\beta\_0}\=\\sum\_{i\=1}^{n}\-2(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1x\_i)\&\= 0\\\\
\&\=\-\\sum\_{i\=1}^ny\_i\+\\sum\_{i\=1}^n\\hat \\beta\_0\+\\sum\_{i\=1}^n\\hat\\beta\_1x\_i\\\\
\&\=\-\\sum\_{i\=1}^ny\_i\+n\\hat\\beta\_0\+\\sum\_{i\=1}^n\\hat\\beta\_1x\_i
\\end{align}\\]
If we solve for \\(\\hat\\beta\_0\\), this becomes:
\\\[\\begin{align}
\\hat\\beta\_0\&\=\\frac{1}{n}\\sum\_{i\=1}^{n}y\_i\-\\frac{1}{n}\\hat\\beta\_1\\sum\_{i\=1}^{n}x\_i\\\\
\&\=\\bar y \- \\hat\\beta\_1\\bar x
\\end{align}\\]
This solution is indeed a minimum as the second partial derivative is positive:
\\(\\frac{\\partial^2 Q}{\\partial\\hat\\beta\_0^2}\=2n\>0\\)
The second condition is:
\\\[ \\begin{align}
\\frac{\\partial Q}{\\partial \\hat\\beta\_1}\& \=\\sum\_{i\=1}^{n}\-2x\_i(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1x\_i)\=0\\\\
\&\=\\sum\_{i\=1}^{n}(\-x\_iy\_i\+\\hat\\beta\_0x\_i\+\\hat\\beta\_1x\_i^2\)\\\\
\&\=\-\\sum\_{i\=1}^{n}x\_iy\_i\+\\hat\\beta\_0\\sum\_{i\=1}^{n}x\_i\+\\hat\\beta\_1\\sum\_{i\=1}^{n}x\_i^2
\\end{align}\\]
Substituting the expression for \\(\\hat\\beta\_0\\) derived above yields:
\\\[ \\begin{align}
0\&\=\-\\sum\_{i\=1}^{n}x\_iy\_i\+(\\bar y \- \\hat\\beta\_1\\bar x)\\sum\_{i\=1}^{n}x\_i\+\\hat\\beta\_1\\sum\_{i\=1}^{n}x\_i^2\\\\
\&\=\-\\sum\_{i\=1}^{n}x\_iy\_i\+\\bar y\\sum\_{i\=1}^{n}x\_i\-\\hat\\beta\_1\\bar x\\sum\_{i\=1}^{n}x\_i\+\\hat\\beta\_1\\sum\_{i\=1}^{n}x\_i^2
\\end{align}\\]
Separating into two sums:
\\\[ \\sum\_{i\=1}^{n}\\left( x\_iy\_i\-x\_i\\bar y\\right)\-\\hat\\beta\_1\\sum\_{i\=1}^{n}\\left(x\_i^2\-x\_i\\bar x\\right)\=0 \\]
So that:
\\\[ \\hat\\beta\_1 \= \\frac{\\sum\_{i\=1}^{n}\\left( x\_iy\_i\-x\_i\\bar y\\right)}{\\sum\_{i\=1}^{n}\\left( x\_i^2\-x\_i\\bar x\\right)} \= \\frac{\\sum\_{i\=1}^{n}\\left( x\_iy\_i\\right)\-n\\bar x\\bar y}{\\sum\_{i\=1}^{n}\\left( x\_i^2\\right)\-n \\bar x^2} \\]
Note furthermore that, since \\(\\sum\_{i\=1}^{n} x\_i \= n \\bar x\\) and \\(\\sum\_{i\=1}^{n} y\_i \= n \\bar y\\):
\\\[ \\sum\_{i\=1}^{n}\\left( \\bar x^2\-x\_i\\bar x\\right)\=0\\]
And:
\\\[ \\sum\_{i\=1}^{n}\\left(\\bar x \\bar y \- y\_i \\bar x\\right)\=0\\]
This can be used in order to expand the previous term and finally to rewrite \\(\\hat\\beta\_1\\) as the ratio of \\(Cov(x,y)\\) to \\(Var(x)\\):
\\\[
\\begin{align}
\\hat\\beta\_1\&\=\\frac{\\sum\_{i\=1}^{n}\\left( x\_iy\_i\-x\_i\\bar y\\right)\+\\sum\_{i\=1}^{n}\\left(\\bar x\\bar y \- y\_i \\bar x\\right)}{\\sum\_{i\=1}^{n}\\left( x\_i^2\-x\_i\\bar x\\right)\+\\sum\_{i\=1}^{n}\\left( \\bar x^2\-x\_i\\bar x\\right)}\=\\frac{\\sum\_{i\=1}^{n}\\left( x\_iy\_i\-x\_i\\bar y\\right)\+0}{\\sum\_{i\=1}^{n}\\left( x\_i^2\-x\_i\\bar x\\right)\+0}\\\\
\\\\
\&\=\\frac{\\frac{1}{n}\\sum\_{i\=1}^{n}\\left( x\_i\-\\bar x\\right) \\left(y\_i\- \\bar y \\right)}{\\frac{1}{n}\\sum\_{i\=1}^{n}\\left( x\_i\-\\bar x\\right)^2}\\\\
\\\\
\&\=\\frac{Cov(x,y)}{Var(x)}
\\end{align}\\]
The solution is indeed a minimum as the second partial derivative is positive:
\\\[\\frac{\\partial^2Q}{\\partial \\hat\\beta\_1^2}\= 2 \\sum\_{i\=1}^{n}x\_i^2 \>0\\]
Let’s use these formulas to calculate regression coefficients for the running example as well:
```
tibble(
beta_1 = cov(x, y) / var(x),
beta_0 = mean(y) - beta_1 * mean(x)
)
```
```
## # A tibble: 1 × 2
## beta_1 beta_0
## <dbl> <dbl>
## 1 7.08 -28.5
```
A similar result also exists for regression with more than one predictor variable, so\-called **multiple linear regression**.
**Theorem 12\.2 (OLS general)** Let \\(X\\) be the \\(n \\times (k\+1\)\\) regression matrix for a linear regression model with \\(k\\) predictor variables for a data set \\(y\\) with \\(n\\) observations. The solution for OLS regression
\\\[
\\hat{\\beta} \= \\langle \\hat{\\beta}\_0, \\hat{\\beta}\_1, \\dots , \\hat{\\beta}\_k\\rangle \= \\arg \\min\_{\\beta} \\sum\_{i \= 1}^k (y\_i \- (X \\beta)\_i)^2
\\]
is given by:
\\\[
\\hat{\\beta} \= (X^T \\ X)^{\-1}\\ X^Ty
\\]
Show proof.
*Proof*. With \\(n\\) observations, the vector \\(\\xi\\) of predicted values for given coefficient vector \\(\\beta\\) is:
\\\[
\\xi\=X \\beta
\\]
More explicitly, this means that:
\\\[
\\begin{align\*}
\\xi\_1\&\=\\beta\_0 \+ \\beta\_{1} X\_{11}\+\\beta\_2 X\_{12} \+ \\ldots \+ \\beta\_k X\_{1k}\\\\
\\xi\_2\&\=\\beta\_0 \+ \\beta\_{1} X\_{21}\+\\beta\_2 X\_{22} \+ \\ldots \+ \\beta\_k X\_{2k}\\\\
\\ldots\\\\
\\xi\_n\&\=\\beta\_0 \+ \\beta\_{1} X\_{n1}\+\\beta\_2 X\_{n2}\+ \\ldots \+ \\beta\_k X\_{nk}
\\end{align\*}
\\]
The OLS estimator is obtained (like in the special case of simple linear regression) by minimizing the residual sum of squares (RSS).
The RSS for the multiple linear regression model is
\\\[
Q\=\\sum\_{i\=1}^n \\left(y\_i\-\\beta\_0 \- \\beta\_1 X\_{i1}\- \\beta\_2 X\_{i2}\-...\-\\beta\_k X\_{ik}\\right)^2
\\]
To find the minimum of \\(Q\\) we calculate the first partial derivative of \\(Q\\) for each \\(\\beta\_j\\):
\\\[\\begin{align}
\\frac{\\partial Q}{\\partial\\beta\_0}\&\=2\\sum\_{i\=1}^n\\left(y\_i\-\\beta\_0\-\\beta\_1 X\_{i1}\-\\beta\_2 X\_{i2}\- \\ldots \-\\beta\_k X\_{ik}\\right)(\-1\)\\\\
\\\\
\\frac{\\partial Q}{\\partial\\beta\_1}\&\=2\\sum\_{i\=1}^n\\left(y\_i\-\\beta\_0\-\\beta\_1 X\_{i1}\-\\beta\_2 X\_{i2}\- \\ldots \-\\beta\_k X\_{ik}\\right)(\- X\_{i1})\\\\
\\\\
\\frac{\\partial Q}{\\partial\\beta\_2}\&\=2\\sum\_{i\=1}^n\\left(y\_i\-\\beta\_0\-\\beta\_1 X\_{i1}\-\\beta\_2 X\_{i2}\- \\ldots \-\\beta\_k X\_{ik}\\right)(\- X\_{i2})\\\\
\\ldots \\\\
\\frac{\\partial Q}{\\partial\\beta\_k}\&\=2\\sum\_{i\=1}^n\\left(y\_i\-\\beta\_0\-\\beta\_1 X\_{i1}\-\\beta\_2 X\_{i2}\- \\ldots \-\\beta\_k X\_{ik}\\right)(\- X\_{ik})
\\end{align}\\]
For the minimum \\(\\hat{\\beta}\\) the derivative of each equation must be zero:
\\\[\\begin{align}
\&\\sum\_{i\=1}^n\\left(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1 X\_{i1}\-\\hat\\beta\_2 X\_{i2}\- \\ldots \-\\hat\\beta\_k X\_{ik}\\right) \= 0\\\\
\&\\sum\_{i\=1}^n\\left(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1 X\_{i1}\-\\hat\\beta\_2 X\_{i2}\- \\ldots \-\\hat\\beta\_k X\_{ik}\\right) X\_{i1} \= 0\\\\
\&\\sum\_{i\=1}^n\\left(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1 X\_{i1}\-\\hat\\beta\_2 X\_{i2}\- \\ldots \-\\hat\\beta\_k X\_{ik}\\right) X\_{i2} \= 0\\\\
\& \\ldots \\\\
\&\\sum\_{i\=1}^n\\left(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1 X\_{i1}\-\\hat\\beta\_2 X\_{i2}\- \\ldots \-\\hat\\beta\_k X\_{ik}\\right) X\_{ik} \= 0
\\end{align}\\]
Alternatively, we can use matrix notation and combine the above equations into the following form:
\\\[X^Ty\-X^TX\\hat\\beta\=0\\]
Rearranging this, the following expression is known as **normal equations**:
\\\[X^TX\\hat\\beta\=X^Ty\\]
Just for illustration, the system of normal equations in expanded matrix notation is:
\\\[
\\begin{bmatrix}
n \& \\sum\_{i\=1}^n X\_{i1} \& ... \& \\sum\_{i\=1}^n X\_{ik}\\\\
\\sum\_{i\=1}^n X\_{i1} \& \\sum\_{i\=1}^n X\_{i1}^2 \& ... \& \\sum\_{i\=1}^n X\_{i1} X\_{ik}\\\\... \& ... \& ... \& ...\\\\
\\sum\_{i\=1}^n X\_{ik} \& \\sum\_{i\=1}^n X\_{ik} X\_{i1} \& ... \& \\sum\_{i\=1}^n X\_{ik}^2
\\end{bmatrix}
\\begin{bmatrix}
\\hat\\beta\_0 \\\\
\\hat\\beta\_1 \\\\
\\ldots \\\\
\\hat\\beta\_k
\\end{bmatrix}
\=
\\begin{bmatrix}
\\sum\_{i\=1}^ny\_i\\\\\\sum\_{i\=1}^n X\_{i1}y\_i \\\\
\\ldots \\\\
\\sum\_{i\=1}^nX\_{ik}y\_i
\\end{bmatrix}
\\]
The estimator \\(\\hat\\beta\\) can be obtained by rearranging again:
\\\[
\\hat{\\beta} \= (X^T \\ X)^{\-1}\\ X^Ty
\\]
Finally, to see that \\(\\hat\\beta\\) is indeed a global minimizer of the OLS criterion, we check that the second\-order condition holds, i.e., that the Hessian is always a positive semidefinite matrix (details omitted here):
\\\[\\frac{\\partial^2 Q}{\\partial \\mathbf{\\hat\\beta}^2}\=2X^TX \\ge 0\.\\]
The availability of these elegant mathematical solutions for OLS\-regression explains why the computation of best\-fitting regression coefficients with a built\-in function like `lm` is lightning fast: it does not rely on optimization with `optim`, sampling methods or other similar computational approaches. Instead, it instantaneously calculates the analytical solution.
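As a final illustration, here is a minimal sketch that applies the closed\-form solution from Theorem 12\.2 directly to the running example (assuming `murder_data` as above); the result should match the `lm()` and covariance\-based estimates obtained earlier:
```
# Sketch: OLS coefficients via the normal equations, beta_hat = (X^T X)^{-1} X^T y
X <- model.matrix(~ unemployment, data = murder_data)
y <- murder_data %>% pull(murder_rate)
beta_hat <- solve(t(X) %*% X) %*% t(X) %*% y
beta_hat
```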
### 12\.1\.1 Prediction without any further information
We are interested in explaining or predicting the murder rates in a city using the [murder data set](app-93-data-sets-murder-data.html#app-93-data-sets-murder-data).
Concretely, we are interested in whether knowing a city’s unemployment rate (stored in variable `unemployment`) helps make better predictions for that city’s murder rate (stored in variable `murder_rate`).
Let’s first plot the murder rate for every city (just numbered consecutively in the order of their appearance in the data set):
Suppose we know the vector \\(y\\) of all observed murder rates but we don’t know which murder rate belongs to which city.
We are given a city to guess its murder rate.
But we cannot tell cities apart.
So we must guess one number as a prediction for any of the cities.
What’s a good guess?
Actually, how good a guess is depends on what we want to do with this guess (the utility function of a decision problem).
For now, let’s just assume that we have a measure of **prediction error** which we would like to minimize with our guesses.
A common measure of **prediction error** uses intuitions about geometric distance and is defined in terms of the **total sum of squares**, where \\(y\\) is the \\(n\\)\-dimensional vector of observed murder rates and \\(\\xi \\in \\mathbb{R}\\) is a single numeric prediction:
\\\[
\\text{TSS}(\\xi) \= \\sum\_{i\=1}^n (y\_i \- \\xi)^2
\\]
This measure of prediction error is what underlies the ordinary least squares approach to regression.
It turns out that the **best prediction** we can make, i.e., the number \\(\\hat{\\xi} \= \\arg \\min\_{\\xi} \\text{TSS}(\\xi)\\) for which TSS is minimized, is the mean \\(\\bar{y}\\) of the original predictions.
So, given the goal of minimizing TSS, our best guess is the mean of the observed murder rates.
**Proposition 12\.1 (Mean minimizes total sum of squares.)** \\\[
\\arg \\min\_{\\xi} \\sum\_{i\=1}^n (y\_i \- \\xi)^2 \= \\frac{1}{n} \\sum\_{i\=1}^n y\_i \= \\bar{y}
\\]
Show proof.
*Proof*. To find a minimum, consider the first derivative of \\(\\text{TSS}(\\xi)\\) and find its zero points:
\\\[
\\begin{align\*}
\& f(\\xi) \= \\sum\_{i\=1}^n (y\_i \- \\xi)^2 \= \\sum\_{i\=1}^n (y\_i^2 \- 2 y\_i \\xi \+ \\xi^2\) \\\\
\& f'(\\xi) \= \\sum\_{i\=1}^n (\-2y\_i \+ 2\\xi) \= 0 \\\\
\\Leftrightarrow \& \\sum\_{i\=1}^n \-2y\_i \= \-2 n \\xi \\\\
\\Leftrightarrow \& \\xi \= \\frac{1}{n} \\sum\_{i\=1}^n y\_i \= \\bar{y} \\\\
\\end{align\*}
\\]
Indeed, the zero point \\(\\xi \= \\bar{y}\\) is a minimum because its second derivative is positive:
\\\[
f''(\\bar{y}) \= 2
\\]
The plot below visualizes the prediction we make based on the naive predictor \\(\\hat{y}\\).
The black dots show the data points, the red line shows the prediction we make (the mean murder rate), the small hollow dots show the specific predictions for each observed value and the gray lines show the distance between our prediction and the actual data observation.
To obtain the TSS for the prediction shown in the plot above, we would need to take each gray line, measure its distance, square this number and sum over all lines (cities).
In the case at hand, the prediction error we make by assuming just the mean as predictor is:
```
y <- murder_data %>% pull(murder_rate)
n <- length(y)
tss_simple <- sum((y - mean(y))^2)
tss_simple
```
```
## [1] 1855.202
```
At this stage, a question might arise:
Why square the distances to obtain the total sum of, well, *squares*?
One intuitive motivation is that we want small deviations from our prediction to have less overall impact than huge deviations.
A technical motivation is that the best solution to OLS estimation corresponds to the best solution under a maximum likelihood approach, if we use a normal distribution as likelihood function.
This is what we will cover in Section [12\.2](Chap-04-01-linear-regression-MLE.html#Chap-04-01-linear-regression-MLE) after having introduced the regression model in full.
### 12\.1\.2 Prediction with knowledge of unemployment rate
We might not be very content with this prediction error. Suppose we could use some piece of information about the random city whose murder rate we are trying to predict. For instance, we might happen to know the value of the variable `unemployment`. How could that help us make a better prediction?
There does seem to be some useful information in the unemployment rate, which may lead to better predictions of the murder rate. We see this in a scatter plot:
Let us assume, for the sake of current illustration, that we expect a very particular functional relationship between the variables `murder_rate` and `unemployment`. For some reason or other, we hypothesize that even with 0% unemployment, the murder rate would be positive, namely at 4 murders per million inhabitants. We further hypothesize that with each increase of 1% in the unemployment percentage, the murder rate per million increases by 2\. The functional relationship between dependent variable \\(y\\) (\= murder rate) and predictor variable \\(x\\) (\= unemployment) can then be expressed as a linear function of the following form, where \\(\\xi \\in \\mathbb{R}^n\\) is now a vector of \\(n\\) predictions (one prediction \\(\\xi\_i\\) for each data observation \\(y\_i\\)):[54](#fn54)
\\\[
\\xi\_i \= 2x\_i \+ 4
\\]
Here is a graphical representation of this particular functional relationship assumed in the equation above. Again, the black dots show the data points, the red line the linear function \\(f(x) \= 2x \+4\\), the small hollow dots show the specific predictions for each observed value \\(x\_i\\) and the gray lines show the distance between our prediction \\(\\xi\_i\\) and the actual data observation \\(y\_i\\). (Notice that there are data points for which the unemployment rate is the same, but we observed different murder rates.)
We can again quantify our prediction error in terms of a sum of squares like we did before. For the case of a prediction vector \\(\\xi\\), the quantity in question is called the **residual sum of squares**.
\\\[
\\text{RSS} \= \\sum\_{i\=1}^n (y\_i \- \\xi\_i)^2
\\]
Here is how we can calculate RSS in R for the particular vector \\(\\xi \\in \\mathbb{R}^n\\) for which \\(\\xi\_{i} \= 2x\_i \+ 4\\):
```
y <- murder_data %>% pull(murder_rate)
x <- murder_data %>% pull(unemployment)
predicted_y <- 2 * x + 4
n <- length(y)
rss_guesswork <- sum((y - predicted_y)^2)
rss_guesswork
```
```
## [1] 1327.74
```
Compared to the previous prediction, which was based on the mean \\(\\bar{y}\\) only, this linear function reduces the prediction error (measured here geometrically in terms of a sum of squares).
This alone could be taken as *prima facie* evidence that knowledge of `unemployment` helps make better predictions about `murder_rate`.
**Exercise 13\.1 \[optional]**
1. Compare RSS and TSS. How / where exactly do these notions differ from each other? Think about which information the difference between the two measures conveys.
Solution
TSS computes the distance between a data point and the overall mean of all data points, whereas RSS computes the distance between a data point and a predictor value specific to this data point.
The difference between RSS and TSS tells us how good our prediction is in comparison to a naive prediction (using just the mean).
2. Is it possible for TSS to be smaller than RSS?
That is, could the error based on a single numeric prediction for all data points be smaller than an error obtained for a linear predictor that has a single prediction for each data point?
Solution
Yes, that’s possible.
The definition of RSS and TSS does not imply that we look at the *optimal* point\-valued or linear predictor.
It is conceivable to choose a good single number and a very bad linear predictor, so that RSS is smaller than TSS.
### 12\.1\.3 Linear regression: general problem formulation
Suppose we have \\(k\\) predictor variables \\(x\_1, \\dots , x\_k\\) and a dependent variable \\(y\\).
We consider the linear relation:
\\\[ \\xi\_i({\\beta}\_0, {\\beta}\_1, \\dots, {\\beta}\_k) \= \\beta\_0 \+ \\beta\_1 x\_{1i} \+ \\dots \+ \\beta\_k x\_{ki} \\]
Often we do not explicitly write \\(\\xi\\) as a function of the parameters \\(\\beta\_0, \\dots \\beta\_k\\), and write instead:
\\\[ \\xi\_i \= \\beta\_0 \+ \\beta\_1 x\_{1i} \+ \\dots \+ \\beta\_k x\_{ki} \\]
The parameters \\(\\beta\_0, \\beta\_1, \\dots, \\beta\_k\\) are called **(regression) coefficients**.
In particular, \\(\\beta\_0\\) is called the **(regression) intercept** and \\(\\beta\_1, \\dots, \\beta\_k\\) are **(regression) slope coefficients**.
The term **simple linear regression** is often used to cover the special case of \\(k\=1\\).
If there is more than one predictor, i.e., \\(k \> 1\\), the term **multiple linear regression** is common.
Based on the predictions of a parameter vector \\(\\langle {\\beta}\_0, {\\beta}\_1, \\dots, {\\beta}\_k\\rangle\\), we consider the residual sum of squares as a measure of prediction error:
\\\[\\text{RSS}\_{\\langle {\\beta}\_0, {\\beta}\_1, \\dots, {\\beta}\_k\\rangle} \= \\sum\_{i \= 1}^k \[y\_i \- \\xi\_i ({\\beta}\_0, {\\beta}\_1, \\dots, {\\beta}\_k) ]^2 \\]
We would like to find the *best parameter values* (denoted traditionally by a hat on the parameter’s variable: \\(\\hat{\\beta}\_i\\)) in the sense of minimizing the residual sum of squares:
\\\[
\\langle \\hat{\\beta}\_0, \\hat{\\beta}\_1, \\dots , \\hat{\\beta}\_k\\rangle \= \\arg \\min\_{\\langle \\beta\_0, \\beta\_1, \\dots, \\beta\_k\\rangle} \\text{RSS}\_{\\langle {\\beta}\_0, {\\beta}\_1, \\dots, {\\beta}\_k\\rangle}
\\]
The prediction corresponding to the best parameter values is denoted by \\(\\hat{\\xi} \\in \\mathbb{R}^n\\) and called the *best linear predictor*:
\\\[ \\hat{\\xi}\_i \= \\hat{\\beta}\_0 \+ \\hat{\\beta}\_1 x\_{1i} \+ \\dots \+ \\hat{\\beta}\_k x\_{ki}\\]
It is also possible, and often convenient, to state the linear regression model in terms of matrix operations.
Traditionally, we consider a so\-called **predictor matrix** \\(X\\) of size \\(n \\times (k\+1\)\\), where \\(n\\) is the number of observations in the data set and \\(k\\) is the number of predictor variables.
The predictor matrix includes the values for all predictor variables and it also includes an “intercept column” \\((X^{T})\_0\\) for which \\(X\_{i0}\=1\\) for all \\(1 \\le i \\le n\\) so that the intercept \\(\\beta\_0\\) can be treated on a par with the other regression coefficients.[55](#fn55)
Using the predictor matrix \\(X\\), the linear predictor vector \\(\\xi\\) is:
\\\[\\xi \= X \\beta\\]
**Exercise 13\.2**
How can we interpret the parameters \\(a\\) and \\(b\\) of the linear model \\(\\xi\_i \= a x\_i \+ b\\)?
How are these parameters usually called in regression jargon?
Solution
Parameter \\(a\\) is the slope, \\(b\\) the intercept of a simple linear regression.
Parameter \\(a\\) gives the amount of change of \\(y\\) for each unit change in \\(x\\).
Parameter \\(b\\) gives the prediction \\(\\xi\\) for \\(x\=0\\).
### 12\.1\.4 Finding the OLS\-solution
In the above example, where we regressed `murder_rate` against `unemployment`, the model has two regression coefficients: an intercept term and a slope for `unemployment`.
The optimal solution for these delivers the regression line in the graph below.
The total sum of squares for the best fitting parameters is:
```
## [1] 467.6023
```
This is the lowest prediction error (in terms of RSS) that we can achieve with a linear predictor.
In the following, we discuss several methods of finding the best\-fitting values for regression coefficients that minimize the residual sum of squares.
#### 12\.1\.4\.1 Finding optimal parameters with `optim`
We can use the `optim` function to find the best\-fitting parameter values for our simple linear regression example.
```
# data to be explained / predicted
y <- murder_data %>% pull(murder_rate)
# data to use for prediction / explanation
x <- murder_data %>% pull(unemployment)
# function to calculate residual sum of squares
get_rss <- function(y, x, beta_0, beta_1) {
  yPred <- beta_0 + x * beta_1
  sum((y - yPred)^2)
}
# finding best-fitting values for RSS
fit_rss <- optim(
  par = c(0, 1),          # initial parameter values
  fn  = function(par) {   # function to minimize
    get_rss(y, x, par[1], par[2])
  }
)
# output the results
message(
  "Best fitting parameter values:",
  "\n\tIntercept: ", fit_rss$par[1] %>% round(2),
  "\n\tSlope: ", fit_rss$par[2] %>% round(2),
  "\nRSS for best fit: ", fit_rss$value %>% round(2)
)
```
```
## Best fitting parameter values:
## Intercept: -28.53
## Slope: 7.08
## RSS for best fit: 467.6
```
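As a side remark, `optim` uses the derivative\-free Nelder\-Mead method by default, so the reported values are only approximately optimal. As a quick sanity check (a sketch, assuming the objects `x`, `y` and `get_rss` defined above), we can rerun the optimization with a gradient\-based method and confirm that it lands on (nearly) the same values:
```
# rerun the optimization with a gradient-based method as a sanity check
fit_rss_bfgs <- optim(
  par = c(0, 1),
  fn = function(par) { get_rss(y, x, par[1], par[2]) },
  method = "BFGS"
)
round(fit_rss_bfgs$par, 2)  # should be (nearly) identical to the values above
```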
#### 12\.1\.4\.2 Fitting OLS regression lines with `lm`
R also has a built\-in function `lm` which fits linear regression models via RSS minimization. Here is how you call this function for the running example:
```
# fit an OLS regression
fit_lm <- lm(
  # the formula argument specifies dependent and independent variables
  formula = murder_rate ~ unemployment,
  # we also need to say where the data (columns) should come from
  data = murder_data
)
# output the fitted object
fit_lm
```
```
##
## Call:
## lm(formula = murder_rate ~ unemployment, data = murder_data)
##
## Coefficients:
## (Intercept) unemployment
## -28.53 7.08
```
The output of the fitted object shows the best\-fitting values (compare them to what we obtained before).[56](#fn56)
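If the estimates are needed for further computations, they can be extracted from the fitted object. Here is a small sketch using the standard extractor functions `coef()` and `fitted()`:
```
# named vector with the estimated intercept and slope
coef(fit_lm)
# fitted values, i.e., the best linear predictor for the observed data
head(fitted(fit_lm))
```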
#### 12\.1\.4\.3 Finding optimal parameter values with math
It is also possible to determine the OLS fit analytically, via a mathematical derivation. We start with the case of a simple linear regression with just one predictor variable.
**Theorem 12\.1 (OLS solution for simple linear regression)** For a simple linear regression model with just one predictor for a data set with \\(n\\) observations, the solution for:
\\\[\\arg \\min\_{\\langle \\beta\_0, \\beta\_1\\rangle} \\sum\_{i \= 1}^n (y\_i \- (\\beta\_0 \+ \\beta\_1 x\_{i}))^2\\]
is given by:
\\\[
\\begin{aligned}
\\hat{\\beta\_1} \&\= \\frac{Cov(x,y)}{Var(x)} \&
\\hat{\\beta\_0} \&\= \\bar{y} \- \\hat{\\beta}\_1 \\bar{x}
\\end{aligned}
\\]
Show proof.
*Proof*. *(See e.g., [Olive 2017, 57–59](#ref-olive2017))*
Given a set of \\(n\\) observations \\((x\_i, y\_i)\\), we want to find:
\\\[\\langle \\hat{\\beta}\_0, \\hat{\\beta}\_1 \\rangle \= \\arg \\min\_{\\langle \\beta\_0, \\beta\_1 \\rangle} \\sum\_{i \= 1}^n (y\_i \- (\\beta\_0 \+ \\beta\_1 x\_{i}))^2\\]
Let \\(Q\\) denote the RSS as a function of the parameters. We want to find the minimum of \\(Q\\).
So, we want to find the values \\(\\hat\\beta\_0\\) and \\(\\hat\\beta\_1\\) for which \\(\\frac{\\partial Q}{\\partial \\hat\\beta\_0}\=0\\) and \\(\\frac{\\partial Q}{\\partial \\hat\\beta\_1}\=0\\), since all partial derivatives are equal to 0 at the global minimum.
The first condition is:
\\\[ \\frac{\\partial Q}{\\partial \\hat\\beta\_0}\=\-2\\sum\_{i\=1}^{n}(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1x\_i)\= 0\\]
Dividing by \\(2\\) and expanding the sum gives:
\\\[ \-\\sum\_{i\=1}^{n}y\_i\+n\\hat\\beta\_0\+\\hat\\beta\_1\\sum\_{i\=1}^{n}x\_i \= 0\\]
If we solve for \\(\\hat\\beta\_0\\), this becomes:
\\\[\\begin{align}
\\hat\\beta\_0\&\=\\frac{1}{n}\\sum\_{i\=1}^{n}y\_i\-\\frac{1}{n}\\hat\\beta\_1\\sum\_{i\=1}^{n}x\_i\\\\
\&\=\\bar y \- \\hat\\beta\_1\\bar x
\\end{align}\\]
This solution is indeed a minimum as the second partial derivative is positive:
\\(\\frac{\\partial^2 Q}{\\partial\\hat\\beta\_0^2}\=2n\>0\\)
The second condition is:
\\\[ \\frac{\\partial Q}{\\partial \\hat\\beta\_1} \=\-2\\sum\_{i\=1}^{n}x\_i(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1x\_i)\=0\\]
Dividing by \\(2\\) and expanding the sum gives:
\\\[ \\begin{align}
0\&\=\\sum\_{i\=1}^{n}(\-x\_iy\_i\+\\hat\\beta\_0x\_i\+\\hat\\beta\_1x\_i^2)\\\\
\&\=\-\\sum\_{i\=1}^{n}x\_iy\_i\+\\hat\\beta\_0\\sum\_{i\=1}^{n}x\_i\+\\hat\\beta\_1\\sum\_{i\=1}^{n}x\_i^2
\\end{align}\\]
Substituting the expression for \\(\\hat\\beta\_0\\) derived above yields:
\\\[ \\begin{align}
0\&\=\-\\sum\_{i\=1}^{n}x\_iy\_i\+(\\bar y \- \\hat\\beta\_1\\bar x)\\sum\_{i\=1}^{n}x\_i\+\\hat\\beta\_1\\sum\_{i\=1}^{n}x\_i^2\\\\
\&\=\-\\sum\_{i\=1}^{n}x\_iy\_i\+\\bar y\\sum\_{i\=1}^{n}x\_i\-\\hat\\beta\_1\\bar x\\sum\_{i\=1}^{n}x\_i\+\\hat\\beta\_1\\sum\_{i\=1}^{n}x\_i^2
\\end{align}\\]
Separating into two sums:
\\\[ \\sum\_{i\=1}^{n}\\left( x\_iy\_i\-x\_i\\bar y\\right)\-\\hat\\beta\_1\\sum\_{i\=1}^{n}\\left(x\_i^2\-x\_i\\bar x\\right)\=0 \\]
So that:
\\\[ \\hat\\beta\_1 \= \\frac{\\sum\_{i\=1}^{n}\\left( x\_iy\_i\-x\_i\\bar y\\right)}{\\sum\_{i\=1}^{n}\\left( x\_i^2\-x\_i\\bar x\\right)} \= \\frac{\\sum\_{i\=1}^{n}\\left( x\_iy\_i\\right)\-n\\bar x\\bar y}{\\sum\_{i\=1}^{n}\\left( x\_i^2\\right)\-n \\bar x^2} \\]
Moreover, since \\(\\sum\_{i\=1}^{n}x\_i \= n\\bar x\\) and \\(\\sum\_{i\=1}^{n}y\_i \= n\\bar y\\), the following two identities hold:
\\\[ \\sum\_{i\=1}^{n}\\left( \\bar x^2\-x\_i\\bar x\\right)\=0\\]
And:
\\\[ \\sum\_{i\=1}^{n}\\left(\\bar x \\bar y \- y\_i \\bar x\\right)\=0\\]
Adding these zero\-valued sums to the numerator and denominator of the previous expression allows us to rewrite \\(\\hat\\beta\_1\\) as the ratio of \\(Cov(x,y)\\) to \\(Var(x)\\):
\\\[
\\begin{align}
\\hat\\beta\_1\&\=\\frac{\\sum\_{i\=1}^{n}\\left( x\_iy\_i\-x\_i\\bar y\\right)\+\\sum\_{i\=1}^{n}\\left(\\bar x\\bar y \- y\_i \\bar x\\right)}{\\sum\_{i\=1}^{n}\\left( x\_i^2\-x\_i\\bar x\\right)\+\\sum\_{i\=1}^{n}\\left( \\bar x^2\-x\_i\\bar x\\right)}\=\\frac{\\sum\_{i\=1}^{n}\\left( x\_iy\_i\-x\_i\\bar y\\right)\+0}{\\sum\_{i\=1}^{n}\\left( x\_i^2\-x\_i\\bar x\\right)\+0}\\\\
\\\\
\&\=\\frac{\\frac{1}{n}\\sum\_{i\=1}^{n}\\left( x\_i\-\\bar x\\right) \\left(y\_i\- \\bar y \\right)}{\\frac{1}{n}\\sum\_{i\=1}^{n}\\left( x\_i\-\\bar x\\right)^2}\\\\
\\\\
\&\=\\frac{Cov(x,y)}{Var(x)}
\\end{align}\\]
The solution is indeed a minimum as the second partial derivative is positive:
\\\[\\frac{\\partial^2Q}{\\partial \\hat\\beta\_1^2}\= 2 \\sum\_{i\=1}^{n}x\_i^2 \>0\\]
Let’s use these formulas to calculate regression coefficients for the running example as well:
```
tibble(
  beta_1 = cov(x, y) / var(x),
  beta_0 = mean(y) - beta_1 * mean(x)
)
```
```
## # A tibble: 1 × 2
## beta_1 beta_0
## <dbl> <dbl>
## 1 7.08 -28.5
```
A similar result also exists for regression with more than one predictor variable, so\-called **multiple linear regression**.
**Theorem 12\.2 (OLS general)** Let \\(X\\) be the \\(n \\times (k\+1\)\\) regression matrix for a linear regression model with \\(k\\) predictor variables for a data set \\(y\\) with \\(n\\) observations. The solution for OLS regression
\\\[
\\hat{\\beta} \= \\langle \\hat{\\beta}\_0, \\hat{\\beta}\_1, \\dots , \\hat{\\beta}\_k\\rangle \= \\arg \\min\_{\\beta} \\sum\_{i \= 1}^n (y\_i \- (X \\beta)\_i)^2
\\]
is given by:
\\\[
\\hat{\\beta} \= (X^T \\ X)^{\-1}\\ X^Ty
\\]
Show proof.
*Proof*. With \\(n\\) observations, the vector \\(\\xi\\) of predicted values for given coefficient vector \\(\\beta\\) is:
\\\[
\\xi\=X \\beta
\\]
More explicitly, this means that:
\\\[
\\begin{align\*}
\\xi\_1\&\=\\beta\_0 \+ \\beta\_{1} X\_{11}\+\\beta\_2 X\_{12} \+ \\ldots \+ \\beta\_k X\_{1k}\\\\
\\xi\_2\&\=\\beta\_0 \+ \\beta\_{1} X\_{21}\+\\beta\_2 X\_{22} \+ \\ldots \+ \\beta\_k X\_{2k}\\\\
\\ldots\\\\
\\xi\_n\&\=\\beta\_0 \+ \\beta\_{1} X\_{n1}\+\\beta\_2 X\_{n2}\+ \\ldots \+ \\beta\_k X\_{nk}
\\end{align\*}
\\]
The OLS estimator is obtained (like in the special case of simple linear regression) by minimizing the residual sum of squares (RSS).
The RSS for the multiple linear regression model is
\\\[
Q\=\\sum\_{i\=1}^n \\left(y\_i\-\\beta\_0 \- \\beta\_1 X\_{i1}\- \\beta\_2 X\_{i2}\-...\-\\beta\_k X\_{ik}\\right)^2
\\]
To find the minimum of \\(Q\\), we calculate the first partial derivative of \\(Q\\) with respect to each \\(\\beta\_j\\):
\\\[\\begin{align}
\\frac{\\partial Q}{\\partial\\beta\_0}\&\=2\\sum\_{i\=1}^n\\left(y\_i\-\\beta\_0\-\\beta\_1 X\_{i1}\-\\beta\_2 X\_{i2}\- \\ldots \-\\beta\_k X\_{ik}\\right)(\-1\)\\\\
\\\\
\\frac{\\partial Q}{\\partial\\beta\_1}\&\=2\\sum\_{i\=1}^n\\left(y\_i\-\\beta\_0\-\\beta\_1 X\_{i1}\-\\beta\_2 X\_{i2}\- \\ldots \-\\beta\_k X\_{ik}\\right)(\- X\_{i1})\\\\
\\\\
\\frac{\\partial Q}{\\partial\\beta\_2}\&\=2\\sum\_{i\=1}^n\\left(y\_i\-\\beta\_0\-\\beta\_1 X\_{i1}\-\\beta\_2 X\_{i2}\- \\ldots \-\\beta\_k X\_{ik}\\right)(\- X\_{i2})\\\\
\\ldots \\\\
\\frac{\\partial Q}{\\partial\\beta\_k}\&\=2\\sum\_{i\=1}^n\\left(y\_i\-\\beta\_0\-\\beta\_1 X\_{i1}\-\\beta\_2 X\_{i2}\- \\ldots \-\\beta\_k X\_{ik}\\right)(\- X\_{ik})
\\end{align}\\]
At the minimum \\(\\hat{\\beta}\\), each of these partial derivatives must be zero. Dividing each equation by \\(\-2\\) gives:
\\\[\\begin{align}
\&\\sum\_{i\=1}^n\\left(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1 X\_{i1}\-\\hat\\beta\_2 X\_{i2}\- \\ldots \-\\hat\\beta\_k X\_{ik}\\right) \= 0\\\\
\&\\sum\_{i\=1}^n\\left(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1 X\_{i1}\-\\hat\\beta\_2 X\_{i2}\- \\ldots \-\\hat\\beta\_k X\_{ik}\\right) X\_{i1} \= 0\\\\
\&\\sum\_{i\=1}^n\\left(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1 X\_{i1}\-\\hat\\beta\_2 X\_{i2}\- \\ldots \-\\hat\\beta\_k X\_{ik}\\right) X\_{i2} \= 0\\\\
\& \\ldots \\\\
\&\\sum\_{i\=1}^n\\left(y\_i\-\\hat\\beta\_0\-\\hat\\beta\_1 X\_{i1}\-\\hat\\beta\_2 X\_{i2}\- \\ldots \-\\hat\\beta\_k X\_{ik}\\right) X\_{ik} \= 0
\\end{align}\\]
Alternatively, we can use matrix notation and combine the above equations into the following form:
\\\[X^Ty\-X^TX\\hat\\beta\=0\\]
Rearranging this yields what are known as the **normal equations**:
\\\[X^TX\\hat\\beta\=X^Ty\\]
Just for illustration, the system of normal equations in expanded matrix notation is:
\\\[
\\begin{bmatrix}
n \& \\sum\_{i\=1}^n X\_{i1} \& ... \& \\sum\_{i\=1}^n X\_{ik}\\\\
\\sum\_{i\=1}^n X\_{i1} \& \\sum\_{i\=1}^n X\_{i1}^2 \& ... \& \\sum\_{i\=1}^n X\_{i1} X\_{ik}\\\\... \& ... \& ... \& ...\\\\
\\sum\_{i\=1}^n X\_{ik} \& \\sum\_{i\=1}^n X\_{ik} X\_{i1} \& ... \& \\sum\_{i\=1}^n X\_{ik}^2
\\end{bmatrix}
\\begin{bmatrix}
\\hat\\beta\_0 \\\\
\\hat\\beta\_1 \\\\
\\ldots \\\\
\\hat\\beta\_k
\\end{bmatrix}
\=
\\begin{bmatrix}
\\sum\_{i\=1}^ny\_i\\\\\\sum\_{i\=1}^n X\_{i1}y\_i \\\\
\\ldots \\\\
\\sum\_{i\=1}^nX\_{ik}y\_i
\\end{bmatrix}
\\]
The estimator \\(\\hat\\beta\\) is obtained by multiplying both sides with \\((X^TX)^{\-1}\\) from the left (assuming that \\(X^TX\\) is invertible):
\\\[
\\hat{\\beta} \= (X^T \\ X)^{\-1}\\ X^Ty
\\]
Finally, to see that \\(\\hat\\beta\\) is indeed a global minimizer of the OLS criterion, we check the second\-order condition: the Hessian matrix is always positive semidefinite (details omitted here):
\\\[\\frac{\\partial^2 Q}{\\partial \\hat{\\beta}\\, \\partial \\hat{\\beta}^T}\=2X^TX \\succeq 0\.\\]
The availability of these elegant mathematical solutions for OLS regression explains why the computation of best\-fitting regression coefficients with a built\-in function like `lm` is lightning fast: it does not rely on iterative optimization with `optim`, sampling methods or similar computational approaches. Instead, it computes the analytical solution directly (in practice via a numerically stable matrix decomposition rather than an explicit matrix inverse).
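To connect the theorem back to R, here is a minimal sketch that reproduces the `lm` estimates for the running example directly from the matrix formula (assuming the `murder_data` tibble with the columns `murder_rate` and `unemployment` used throughout this section):
```
# build the predictor matrix X (first column of ones for the intercept)
X <- model.matrix(~ unemployment, data = murder_data)
y <- murder_data %>% pull(murder_rate)
# OLS estimate: (X^T X)^{-1} X^T y
solve(t(X) %*% X) %*% t(X) %*% y
```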